October 2020, Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract
A homing task that could not be done by image matching.
Author Affiliations & Notes
  • Maria Elena Stefanou
    University of Reading
  • Alexander Muryy
    University of Reading
  • Andrew Glennerster
    University of Reading
  • Footnotes
    Acknowledgements  Funded by EPSRC/Dstl EP/N019423/1
Journal of Vision October 2020, Vol.20, 396. doi:https://doi.org/10.1167/jov.20.11.396
      Maria Elena Stefanou, Alexander Muryy, Andrew Glennerster; A homing task that could not be done by image matching. Journal of Vision 2020;20(11):396. https://doi.org/10.1167/jov.20.11.396.

      © ARVO (1962-2015); The Authors (2016-present)

Returning to a previously visited location (‘home’) could be accomplished either by image matching or by 3D reconstruction of the scene. We have previously shown that participants’ errors are better predicted by image matching, but here we restrict participants’ views to prevent them from using this strategy. In the learning phase, participants in immersive virtual reality viewed a naturalistic indoor scene from one zone (with binocular vision and limited head movements), with a restricted field of view (FOV; a 90-degree cone) and only one permitted viewing direction (e.g. North). Once participants were familiar with the view, the cyclopean point was briefly frozen with respect to the scene; this defined ‘home’. Participants were then teleported to another location and had to return to ‘home’ (the search phase). The FOV was again restricted, but the viewing direction could differ from that of the learning phase by 0, 90 or 180 degrees. The learning-phase view was always towards the centre of the room, and participants had a sufficient view of objects in both phases to ensure that the task was always possible. Participants’ errors (RMSE of the reported location relative to ‘home’) increased with the angle between the learning- and search-phase viewing directions. When the search-phase orientation differed by 90 or 180 degrees, the reported location was systematically shifted in the direction of the search-phase view (GROUP: p < .0001). The fact that participants were able to return relatively close to ‘home’ rules out (by design) the hypothesis that they solved the task with an image-matching strategy. On the other hand, a 3D reconstruction hypothesis does not predict these systematic biases. Any image-based strategy that could explain these data would need to rely on something like the latent-space interpolation that has been so successful in generative adversarial networks (GANs).
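The two error measures described above — the RMSE of reported locations relative to ‘home’, and the systematic shift of responses along the search-phase viewing direction — can be sketched as follows. This is a minimal illustration, not the authors’ analysis code; the function name, the 2D ground-plane representation, and the example data are all assumptions made for the sketch.

```python
import numpy as np

def homing_errors(home, reported, view_dir):
    """Compute homing-error measures for one condition.

    home     : (2,)  true 'home' location on the ground plane (hypothetical coords)
    reported : (n, 2) reported locations across trials
    view_dir : (2,)  unit vector of the search-phase viewing direction
    Returns (rmse, bias): RMSE of reported locations about 'home', and the
    mean error component along view_dir (positive = shifted in that direction).
    """
    home = np.asarray(home, dtype=float)
    reported = np.asarray(reported, dtype=float)
    view_dir = np.asarray(view_dir, dtype=float)

    errors = reported - home                           # per-trial error vectors
    rmse = np.sqrt(np.mean(np.sum(errors ** 2, axis=1)))
    bias = float(np.mean(errors @ view_dir))           # projection onto view_dir
    return rmse, bias

# Made-up example: home at the origin, search-phase view towards 'North' (0, 1),
# and responses that tend to lie ahead of 'home' in that viewing direction.
rmse, bias = homing_errors([0, 0],
                           [[0.1, 0.5], [-0.2, 0.7], [0.0, 0.6]],
                           [0, 1])
```

On the made-up data above, `bias` comes out positive, i.e. the mean reported location is displaced in the search-phase viewing direction — the signature of the systematic shift reported in the abstract.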

