**There is good evidence that simple animals, such as bees, use view-based strategies to return to a familiar location, whereas humans might use a 3-D reconstruction to achieve the same goal. Assuming some noise in the storage and retrieval process, these two types of strategy give rise to different patterns of predicted errors in homing. We describe an experiment that can help distinguish between these models. Participants wore a head-mounted display to carry out a homing task in immersive virtual reality. They viewed three long, thin, vertical poles and had to remember where they were in relation to the poles before being transported (virtually) to a new location in the scene from where they had to walk back to the original location. The experiment was conducted in both a rich-cue scene (a furnished room) and a sparse scene (no background and no floor or ceiling). As one would expect, in a rich-cue environment, the overall error was smaller, and in this case, the ability to separate the models was reduced. However, for the sparse-cue environment, the view-based model outperforms the reconstruction-based model. Specifically, the likelihood of the experimental data is similar to the likelihood of samples drawn from the view-based model (but assessed under both models), and this is not true for samples drawn from the reconstruction-based model.**

*multimodal* cognitive map (Tcheang, Bülthoff, & Burgess, 2011), as has people's ability to take an appropriate novel short cut between two points (path integration; Schinazi, Nardi, Newcombe, Shipley, & Epstein, 2013).

*d*′ of less than one (0.29 and 0.58), one had a *d*′ of 1.46, and one had an infinite *d*′; i.e., this participant (S4) was correct on all 96 trials. Unlike participants S1–S4, participants S5–S8 were simply asked at the very end of the experiment whether they had seen the green pole move between intervals, and participants S6, S7, and S8 said they did not. S5 noticed the movement, and this participant shows distinctly different navigation behavior in the green-pole-moving condition than in the green-pole-static conditions. In “Model comparison results” in Model comparison, we show a reanalysis of the data from S4 and S5, excluding all green-pole-moving trials. The reanalysis shows that the conclusions about model comparison were not affected.
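As an aside on how these values arise, the sensitivity index is *d*′ = *z*(H) − *z*(FA), where *z* is the inverse of the standard normal CDF. The following Python sketch is illustrative only (not the analysis code used in the study; the 1/(2*N*) correction shown is one common convention), and it makes clear why a participant who is correct on all 96 trials has an infinite *d*′:

```python
from statistics import NormalDist


def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity index: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)


# Perfect performance (hit rate 1.0, false-alarm rate 0.0) would need
# z(1.0), which is unbounded, hence an "infinite" d-prime. A common
# correction clips the rates to 1/(2N) and 1 - 1/(2N) for N trials.
n = 96
corrected = d_prime(1 - 1 / (2 * n), 1 / (2 * n))  # roughly 5.1
```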

*p*s < 0.001).

*σ*).

*σ* as well as the separation between the cameras were free parameters when Pickup et al. (2013) optimized this reconstruction model using their data (but the rotation of the cameras was constrained so that they always faced the green pole). They found that two cameras placed at the maximum permitted separation (80 cm) best explained the navigation data in that paper (and a *σ* of 0.0128 times the assumed focal length of the camera; see Pickup et al., 2013, for details). The logic of allowing such a wide baseline is that participants could move from side to side in the viewing zone up to a maximum of 80 cm, and this provided useful motion parallax, so the models should have access to this information too. The image error associated with each feature (*σ*) results in a spread in the estimate of the location of the corresponding pole, as shown in Figure 7.

*σ*) were fixed as above, but one parameter was allowed to vary to best fit the data for each participant (λ), defined in Equation 1 below. So, the steps required to determine the likelihood of a given data set under the reconstruction model were as follows:

- A metric reconstruction was carried out from a series of views in interval 1. The views were taken from a line within the viewing zone, orthogonal to a line from the center of the viewing zone to the center point between the two outer poles.
- From these views, we obtained a Gaussian representation of each pole's location in egocentric 2-D coordinates.
- The difference between the current location's egocentric pole representation and the goal point's egocentric representation was found using the Bhattacharyya distance *d*. Details of how this was calculated are given in Pickup et al. (2013), and Figure 7 provides an illustration: If panels a and b were overlaid, there would be quite a large overlap between ellipses of the same color, whereas the overlap between panels a and c would be much smaller and the Bhattacharyya distance correspondingly larger. Bhattacharyya distance is a standard measure of distance between probability distributions. Each pole was treated independently. An alternative approach that takes the relationship between poles into consideration is described in Pickup et al. (2013), but it bears some similarities to the view-based model, and we opted to test the extremes of the reconstruction- to view-based spectrum.
- The (un-normalized) *likelihood* of the location matching the goal point was taken to be\begin{equation}\tag{1}L \propto \exp \left\{ - \lambda d \right\},\end{equation}where λ is the additional weighting parameter, which we allowed to vary across participants. It determines how quickly the likelihood should decay with the magnitude of the Bhattacharyya distance *d* (Pickup et al., 2013).
- To turn the likelihoods into full probabilities that can be compared across trials, we ensured that the integral of the likelihood function across the (*x*, *y*) plane is unity. For this, we estimated the integral\begin{equation}\tag{2}Z = \int_x \int_y \exp \left\{ - \lambda d(x,y) \right\} \, dx \, dy,\end{equation}where *d*(*x*, *y*) is the Bhattacharyya distance between the goal point's egocentric representation and the egocentric representation built at the point (*x*, *y*).
- The total likelihood of the data set was found by multiplying together the normalized probabilities for each data point.
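The steps above can be sketched in code. The following Python toy is illustrative only: the pole layout, the fixed covariance, and the `egocentric` function (a stand-in for the actual multi-view reconstruction) are invented for the example, but the per-pole Bhattacharyya distances, the exp(−λ*d*) likelihood of Equation 1, and the grid estimate of the normalizer *Z* in Equation 2 follow the description above.

```python
import math

POLES = [(-1.0, 3.0), (0.0, 3.5), (1.0, 3.0)]  # toy pole layout (metres)
COV = [[0.25, 0.0], [0.0, 0.25]]               # toy per-pole uncertainty


def _det2(S):
    return S[0][0] * S[1][1] - S[0][1] * S[1][0]


def bhattacharyya_2d(mu1, S1, mu2, S2):
    """Bhattacharyya distance between two 2-D Gaussians (mean, covariance)."""
    S = [[0.5 * (S1[i][j] + S2[i][j]) for j in range(2)] for i in range(2)]
    dx, dy = mu1[0] - mu2[0], mu1[1] - mu2[1]
    det = _det2(S)
    # (mu1 - mu2)^T S^{-1} (mu1 - mu2) via the explicit 2x2 inverse
    maha = (S[1][1] * dx * dx - 2 * S[0][1] * dx * dy + S[0][0] * dy * dy) / det
    return maha / 8.0 + 0.5 * math.log(det / math.sqrt(_det2(S1) * _det2(S2)))


def egocentric(p):
    """Stand-in for the reconstruction step: one Gaussian per pole,
    centred on the pole's position relative to location p."""
    return [((px - p[0], py - p[1]), COV) for px, py in POLES]


def total_d(p, goal_repr):
    """Sum of per-pole distances (each pole treated independently)."""
    return sum(bhattacharyya_2d(m1, S1, m2, S2)
               for (m1, S1), (m2, S2) in zip(egocentric(p), goal_repr))


def normalised_prob(p, goal, lam=1.0, half=2.0, n=41):
    """exp(-lam * d), normalised on a grid so it integrates to one (Eqs. 1-2)."""
    goal_repr = egocentric(goal)
    step = 2 * half / (n - 1)
    xs = [goal[0] - half + i * step for i in range(n)]
    ys = [goal[1] - half + i * step for i in range(n)]
    Z = sum(math.exp(-lam * total_d((x, y), goal_repr))
            for x in xs for y in ys) * step * step
    return math.exp(-lam * total_d(p, goal_repr)) / Z
```

With equal covariances the log-determinant term vanishes and the distance reduces to a scaled squared offset, so the normalized likelihood falls off smoothly with distance from the goal.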

*ϕ*_{γ} and *ϕ*_{α} refer to the largest and smallest angles between pairs of poles: *θ*_{rg}, *θ*_{gb}, and *θ*_{rb}, which are the three visual angles between the poles (red–green, green–blue, and red–blue, respectively). The relative magnitudes of the angles in the scene are important because changes in a small angle have much more of an impact on performance than the same changes in a wide angle. Finally, *δ*_{α} refers to the disparity between the green pole and its nearest neighbor (see Pickup et al., 2011).

*γ* error (*x*-axis). This can also be observed in Figures 4 and 6, in which responses are, on the whole, either slightly closer to or further away from the three poles than the goal point; this can be accounted for in this model by allowing the mean proportional error in *γ* (feature *f*_{A}) to be slightly higher than zero.

(*x*, *y*) space is not area-preserving. As before, we find the *total likelihood* of the data under the view-based model by multiplying together the probabilities of all the individual end points recorded for a given participant. An example of the view-based model results being transformed back into a room coordinate frame is shown on the right-hand side of Figure 11.

*x* would be quite different for the two samples, but the height of the red curve over all the samples (total likelihood) could well be similar. This intuition is confirmed in the right panel of Figure 10, which shows on the *y*-axis that the likelihood of samples under the red Gaussian model is very similar whether the samples originate from the red Gaussian model (red dots) or from the blue Gaussian model (blue dots). Of course, the reverse is not true. Sampling from the red distribution yields a very large number of samples that are extremely unlikely under the blue model, pulling down the total likelihoods and making the cloud of samples from the red PDF quite different from those drawn from the blue PDF when assessed under the blue model. The view-based and reconstruction models that we examine show a similar pattern. For each model, we create a reference distribution derived from simulated data sets sampled directly from the model's predictions. This distribution tells us what kind of likelihoods we would expect if the model used to create the simulated data sets were the “true” underlying model. Hence, we can say whether the likelihood of the experimental data under that model is “typical” or “atypical” relative to the random samples or, at least, whether it is more typical of one model than the other.
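This asymmetry can be reproduced with a toy simulation, using two hypothetical 1-D Gaussians as stand-ins for the two models' predicted end-point distributions: a broad “red” model and a narrow “blue” model nested inside it.

```python
import math
import random


def logpdf(x, mu, sigma):
    """Log density of a 1-D Gaussian."""
    return (-0.5 * math.log(2 * math.pi * sigma ** 2)
            - (x - mu) ** 2 / (2 * sigma ** 2))


def mean_loglik(samples, mu, sigma):
    """Mean log likelihood of a set of samples under one Gaussian model."""
    return sum(logpdf(x, mu, sigma) for x in samples) / len(samples)


random.seed(0)
red = (0.0, 3.0)   # broad model (mean, sd)
blue = (0.0, 0.5)  # narrow model nested inside the broad one
from_red = [random.gauss(*red) for _ in range(5000)]
from_blue = [random.gauss(*blue) for _ in range(5000)]

# Under the broad (red) model, both sets of samples score similarly...
r_under_red = mean_loglik(from_red, *red)
b_under_red = mean_loglik(from_blue, *red)
# ...but red-drawn samples are heavily penalised under the narrow (blue) model.
r_under_blue = mean_loglik(from_red, *blue)
b_under_blue = mean_loglik(from_blue, *blue)
```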

*t*_{data}.” These trials had been chosen in advance as being especially discriminative between the two models (see “Optimisation for model comparison” in Methods). We also show additional analyses for participants S4 and S5 because of concerns that they may have noticed the difference between green-pole-moving and green-pole-static trials and changed their strategy as a result (discussed in “Optimisation for model comparison” in Methods). In this case, we avoided the green-pole-moving trials in the testing phase: The model was trained on two thirds of the green-pole-static data (random split) and tested on the other third (Figure 13).

*t*_{data}) under each of the models, i.e., both under the model that was used to generate the simulated data set *and* under the rival model. These two values of *t*_{data} give rise to a single point plotted at the relevant coordinate in Figure 12, color-coded according to the model from which the sample was drawn. We repeated this many times (10^{4} independent samples) for each model under consideration and for each participant.
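As a sketch of this resampling procedure (again with two hypothetical 1-D Gaussian “models” standing in for the view-based and reconstruction models), each simulated data set contributes one point whose coordinates are its likelihood under the generating model and under the rival:

```python
import math
import random


def dataset_loglik(data, model):
    """Log likelihood of a whole data set under a 1-D Gaussian model."""
    mu, sigma = model
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in data)


def reference_points(model, rival, n_sets=1000, n_trials=20, seed=1):
    """Sample data sets from `model` and score each under both models.

    Returns one 2-D point per simulated data set, analogous to the
    clouds of points plotted in Figure 12.
    """
    rng = random.Random(seed)
    points = []
    for _ in range(n_sets):
        data = [rng.gauss(*model) for _ in range(n_trials)]
        points.append((dataset_loglik(data, model),
                       dataset_loglik(data, rival)))
    return points
```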

*t*)/*n*, under the view-based model and the reconstruction model, of samples taken from each model. The red cloud of dots shows the likelihoods of simulated data drawn from the reconstruction model; the blue dots show the same for the view-based model. Now we can see where the data fall on this plot: The magenta dot shows the likelihoods of the actual data for each participant under both models.

*p*s < 0.05). Using the same criteria, the data are significantly different from the simulated view-based data sets for only three out of eight participants (S4, S5, and S6). This suggests that the view-based model may be preferable to the reconstruction-based one. However, as discussed above, the real differences between the models emerge when the likelihoods of the data under both models are considered together. In this case, it is clear from inspection of Figure 12 that for all participants except S5, the experimental data (magenta circles) are more similar to the samples from the view-based model (blue dots) than they are to those from the reconstruction model (red dots).

*t*_{data} points from simulated data sets for both models that fall within a certain radius around the real data. Data from all participants are combined in this plot. There are always more samples from the view-based model at every radius in the combined plot. Specifically, we calculate the ratio of reconstruction-based simulations relative to all data sets that fall within a given radius; we then increase the radius up to the point at which it includes either all of the samples from one model or all of the samples from the other model. Before that point, the ratio never exceeds 0.01 for all participants except S4 (*ratio* = 0.4112), S5 (*ratio* = 1), and S6 (*ratio* = 0.1592). After reanalysis, this ratio drops to 0.384 for S4 and to 0.169 for S5 (note that Figure 14 shows the reanalyzed data). If the two models provided equally good descriptions of the data, one would expect (on average) a similar number of samples from each model to lie within the tested region, whatever its radius. If, instead, almost all the samples turn out to come from one model rather than the other, as we have found here (Figure 14), this is strong evidence to prefer that model over its rival.
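A minimal sketch of this radius-based count, assuming each simulated data set has been summarized as a 2-D point of its two log likelihoods (the function and variable names here are ours, not the paper's):

```python
import math


def ratio_within(data_pt, recon_pts, view_pts, radius):
    """Fraction of in-radius simulated data sets that came from the
    reconstruction model (0.0 means every neighbour is view-based)."""
    def count(points):
        return sum(math.dist(data_pt, p) <= radius for p in points)

    n_recon, n_view = count(recon_pts), count(view_pts)
    total = n_recon + n_view
    return n_recon / total if total else float("nan")


# Toy clouds: the data point sits inside the view-based cloud.
view = [(0.0, 0.0), (0.1, 0.1), (0.2, 0.0)]
recon = [(5.0, 5.0), (5.1, 4.9)]
```

Here `ratio_within((0.05, 0.05), recon, view, 1.0)` returns 0.0, mirroring the near-zero ratios reported for most participants.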

*not* similar to the likelihood of a random sample taken from the 3-D reconstruction model.

*2-D* location of image features and could be described as a hybrid or intermediate model between a view-based and a 3-D reconstruction model. Models that include view- and reconstruction-based components might perform better in capturing the pattern of navigation errors that we have observed, but we have only explored the two extremes here.

*Annual Review of Neuroscience*, 20, 303–330.

*Trends in Cognitive Sciences*, 10, 551–557.

*The hippocampal and parietal foundations of spatial cognition*. Oxford, UK: Oxford University Press.

*Journal of Comparative Physiology*, 151, 521–543.

*PLoS One*, 9 (11), e112544.

*The International Journal of Robotics Research*, 30 (9), 1100–1123.

*Proceedings of the Ninth IEEE International Conference on Computer Vision* (pp. 1403–1410). New York: IEEE.

*Journal of Experimental Psychology: Learning, Memory, and Cognition*, 31 (2), 195–215, doi:10.1037/0278-7393.31.2.195.

*Autonomous Robots*, 5 (1), 111–125, doi:10.1145/267658.267687.

*Biological Cybernetics*, 79, 191–202, doi:10.1007/s004220050470.

*Journal of Cognitive Neuroscience*, 10 (4), 445–463.

*Journal of Neuroscience Methods*, 199 (2), 328–335, doi:10.1016/j.jneumeth.2011.05.011.

*Journal of Comparative Physiology A*, 195 (7), 681–689.

*Formica rufa* L.) look at and are guided by extended landmarks.

*Journal of Experimental Biology*, 205, 2499–2509.

*Nature*, 436, 801–806.

*Multiple view geometry in computer vision* (2nd ed.). Cambridge, UK: Cambridge University Press.

*Nature Neuroscience*, 16 (9), 1188–1190.

*Perception*, 31 (12), 1467–1475.

*The Annals of Mathematical Statistics*, 22 (1), 79–86.

*Perception*, 28 (11), 1311–1328.

*Proceedings of the National Academy of Sciences, USA*, 107 (37), 16348–16353.

*Image and Vision Computing*, 27 (11), 1658–1670.

*Science*, 268, 569–573.

*Nature Reviews Neuroscience*, 7, 663–678.

*Spatial Cognition VI, LNAI 5248* (pp. 344–360). Berlin/Heidelberg, Germany: Springer-Verlag.

*2013 IEEE International Conference on Robotics and Automation (ICRA)* (pp. 5717–5723). New York: IEEE.

*Journal of Experimental Psychology: Learning, Memory and Cognition*, 32 (6), 1274–1290.

*IEEE Transactions on Pattern Analysis and Machine Intelligence*, 31 (12), 2158–2167.

*The hippocampus as a cognitive map*. Oxford, UK: Oxford University Press.

*Biological Cybernetics*, 107 (4), 449–464.

*PLoS One*, 9 (11), e112793.

*Environment and Behavior*, 12 (1), 65–79.

*Environment and Behavior*, 12 (2), 167–182.

*Hippocampus*, 23 (6), 515–528.

*Nature*, 394, 887–891.

*PLoS One*, 7 (3), e33782, doi:10.1371/journal.pone.0033782.

*Proceedings of the National Academy of Sciences, USA*, 108 (3), 1152–1157, doi:10.1073/pnas.1011843108.

*Psychological Review*, 55 (4), 189–208, doi:10.1037/h0061626.

*Spatial Cognition and Computation*, 2, 333–354, doi:10.1023/A:1015514424931.

*Perception & Psychophysics*, 33 (2), 113–120.

*Experientia*, 35, 1569–1571.

*Proceedings of robotics: Science and systems*. Presented July 13–17, 2015, Rome, Italy, doi:10.15607/RSS.2015.XI.001.

*Journal of Experimental Psychology: Human Perception and Performance*, 34 (5), 1150–1164.