Abstract
We have shown that, in a sparse-cue environment, small changes in scene layout can significantly affect the precision with which observers can return to a previously viewed location (Pickup, L.C., Fitzgibbon, A.W., and Glennerster, A. (2013) Biological Cybernetics, 107, 449-464). The scene consisted of three very long vertical poles viewed from one of three locations, with stereo and motion parallax cues available. The participants were transported (virtually) to a different part of the scene and had to return to the original location. The spread of errors varied systematically with the configuration of poles in this sparse scene. There was no floor or background and no informative size cues (the poles were one pixel wide), so the only visual cues for determining scene layout and observer location were the changing angles (and binocular disparities) between the poles as the observer moved. We have developed a model of navigation based on 3D reconstruction of the scene (Pickup et al., 2013) and a quite different type of model based on matching 'view-based' parameters at the estimated 'home' location (Pickup, L.C., Fitzgibbon, A.W., Gilson, S.J., and Glennerster, A. (2011) IVMSP Workshop, 2011 IEEE 10th, 135-140). Here, we make an explicit comparison between the two types of models. Likelihoods of the data fall within the distribution of likelihoods sampled from the view-based model, but not within that sampled from the 3D reconstruction model. We have repeated the navigation experiment in a rich-cue environment, so that the same vertical poles are now viewed in a room with a floor, furniture, and paintings on the wall. The variance of the homing data and its dependence on scene structure are significantly reduced in the rich-cue condition, making it much harder to discriminate between rival models.
Meeting abstract presented at VSS 2016