December 2001
Volume 1, Issue 3
Vision Sciences Society Annual Meeting Abstract
How could ego-centric location be defined neurally?
Author Affiliations
  • A. Glennerster
    University Laboratory of Physiology, Parks Rd, Oxford OX1 3PT, UK
  • M.E. Hansard
    Department of Computer Science, University College, Gower St, London WC1E 6BT, UK
  • A.W. Fitzgibbon
    Department of Engineering, Parks Rd, Oxford OX1 3PJ, UK
Journal of Vision December 2001, Vol. 1, 6. https://doi.org/10.1167/1.3.6
Abstract

The computation of ego-centric location is often assumed to involve a chain of co-ordinate transformations (eye-, head-, body-, world-centred representations), i.e. a shift in the origin and a rotation of the basis vectors describing an entire scene, but there are no detailed proposals about how this might be carried out in the visual system. Here we present an alternative that avoids co-ordinate transformations of this type. The model relies on the recognition of lists of objects and the angles required to make saccades from one object to another. The list of currently visible objects surrounding the observer can be split into distant and progressively closer lists of objects; location is thus defined hierarchically. As the observer translates, the nearer lists change more rapidly than the distant ones. If the goal is to move between locations defined in this way, we show how retinal flow could have a straightforward interpretation and that it is positively advantageous for observers to maintain gaze on an object as they translate. We also show how the relationship between locations is affected by the presence of occluding surfaces (such as walls), and how the hierarchy of stored locations could be both progressively extended (to cover wider areas) and refined (to sample space more densely). Eye-position signals are used neither in generating the representation nor in reading out from it. The proposed representation is a set of sensory states linked by motor outputs. This is, at least, something we know the visuomotor system can store.
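The scheme described above can be illustrated with a minimal sketch. This is not the authors' implementation; the class and all object names below are hypothetical, and real depth bands would be built from visual input rather than supplied by hand. It shows only the two core ideas of the abstract: a location is a hierarchy of object lists ordered by distance, and the stored links between sensory states are the motor outputs (saccade angles) needed to move gaze between objects, with no eye-position signal or change of basis involved.

```python
from dataclasses import dataclass, field

@dataclass
class SceneMemory:
    # saccades[(a, b)] = (azimuth, elevation) in degrees: the motor
    # output that shifts gaze from object a to object b.
    saccades: dict = field(default_factory=dict)
    # Objects grouped by depth band; index 0 holds the most distant list.
    depth_bands: list = field(default_factory=list)

    def add_saccade(self, from_obj, to_obj, azimuth, elevation):
        # Store the link in both directions (the reverse saccade
        # simply negates both angles).
        self.saccades[(from_obj, to_obj)] = (azimuth, elevation)
        self.saccades[(to_obj, from_obj)] = (-azimuth, -elevation)

    def location_signature(self):
        # A location is the hierarchy of visible object lists, distant
        # bands first.  As the observer translates, near bands change
        # rapidly while distant bands stay stable, so matching the
        # signature coarse-to-fine localises the observer.
        return [sorted(band) for band in self.depth_bands]

    def saccade_to(self, current_fixation, target):
        # Read out the motor command linking two sensory states
        # directly; no co-ordinate transformation is performed.
        return self.saccades[(current_fixation, target)]


# Hypothetical scene: distant landmarks plus nearby objects.
memory = SceneMemory(depth_bands=[["mountain", "tower"], ["door", "desk"]])
memory.add_saccade("tower", "door", azimuth=12.0, elevation=-3.0)

memory.location_signature()       # [["mountain", "tower"], ["desk", "door"]]
memory.saccade_to("door", "tower")  # (-12.0, 3.0)
```

Under this sketch, navigating to a stored location amounts to matching the distant lists first and then executing the stored saccades toward progressively nearer objects, which is one way to read the abstract's claim that the representation is "a set of sensory states linked by motor outputs".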

Glennerster, A., Hansard, M., Fitzgibbon, A. (2001). How could ego-centric location be defined neurally? [Abstract]. Journal of Vision, 1(3):6, 6a, http://journalofvision.org/1/3/6/, doi:10.1167/1.3.6.