August 2014
Volume 14, Issue 10
Vision Sciences Society Annual Meeting Abstract  |   August 2014
A neural model of distance-dependent percept of object size constancy
Author Affiliations
  • Jiehui Qian
    Department of Psychology, Sun Yat-Sen University
  • Arash Yazdanbakhsh
    Center for Computational Neuroscience and Neural Technologies, Program in Cognitive and Neural Systems, Boston University
Journal of Vision August 2014, Vol. 14, 1187. doi: https://doi.org/10.1167/14.10.1187

Jiehui Qian, Arash Yazdanbakhsh; A neural model of distance-dependent percept of object size constancy. Journal of Vision 2014;14(10):1187. https://doi.org/10.1167/14.10.1187.



      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Size constancy is a well-known perceptual phenomenon in which perception remains stable despite the effect of viewing distance on retinal image size. Although theories involving distance scaling to achieve size constancy have flourished on the basis of psychophysical studies, the underlying neural mechanisms remain unclear. Recently, single-cell recordings have shown that distance-dependent size-tuned cells are common along the ventral stream, from V1, V2, and V4 to IT (Dobbins et al., 1998). In addition, fMRI studies demonstrate that an object's perceived size, associated with its perceived egocentric distance, modulates its retinotopic representation in V1 (Murray et al., 2006; Sperandio et al., 2012). These results suggest that V1 contributes to size constancy and that its activity is possibly regulated by feedback of distance information from other brain areas. Here, we propose a neural model based on these findings. A population of gain-modulated MT neurons integrates horizontal disparity (arising from V1) and vergence (arising from FEF) to construct a three-dimensional spatial representation in area LIP. Disparity-selective cells in V1 are gain-modulated and simulated by Gaussian functions, whereas vergence-selective cells in FEF are simulated by sigmoidal functions. Cells in MT integrate the outputs of both the V1 and FEF cells by means of a set of basis functions; the outputs of the MT cells feed forward to cells in LIP to construct a distance map. The LIP neurons send feedback of distance information to MT to obtain a distance-scaling function, and then further back to V1 to modulate the activity of size-tuned cells, resulting in a spread of V1 cortical activity. This process provides V1 with distance-dependent size representations. The model supports the view that size constancy is preserved by scaling retinal image size to compensate for changes in perceived distance, and it suggests a neural circuit capable of implementing this process.
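The gain-field integration stage described in the abstract (Gaussian disparity tuning in V1, sigmoidal vergence tuning in FEF, multiplicative combination into MT basis functions) can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' implementation: all tuning parameters (disparity range, Gaussian widths, sigmoid slopes, population sizes) are hypothetical, and the LIP distance readout is reduced to a comparison of MT population responses.

```python
import numpy as np

# Hypothetical tuning parameters; the abstract does not specify ranges or widths.
preferred_disps = np.linspace(-1.0, 1.0, 9)   # V1 preferred disparities (deg)
verg_thresholds = np.linspace(3.0, 9.0, 9)    # FEF sigmoid inflection points (deg)

def v1_disparity_tuning(d, preferred, sigma=0.2):
    """Gaussian disparity tuning of V1 cells, as stated in the model."""
    return np.exp(-(d - preferred) ** 2 / (2 * sigma ** 2))

def fef_vergence_tuning(v, threshold, slope=1.5):
    """Sigmoidal vergence tuning of FEF cells, as stated in the model."""
    return 1.0 / (1.0 + np.exp(-slope * (v - threshold)))

def mt_population(d, v):
    """MT basis functions: gain-modulated (multiplicative) combination of
    V1 disparity responses and FEF vergence responses."""
    v1 = v1_disparity_tuning(d, preferred_disps)    # shape (9,)
    fef = fef_vergence_tuning(v, verg_thresholds)   # shape (9,)
    return np.outer(v1, fef)                        # shape (9, 9) basis map

# The same retinal disparity at two vergence states produces different MT
# population patterns, which is what lets downstream LIP cells recover an
# egocentric distance map rather than disparity alone.
resp_near = mt_population(d=0.3, v=8.0)  # near fixation
resp_far = mt_population(d=0.3, v=3.0)   # far fixation, same disparity
assert not np.allclose(resp_near, resp_far)
```

In this sketch the distance-dependence arises purely from the multiplicative gain field: disparity alone is ambiguous about absolute distance, and the vergence signal disambiguates it at the MT stage, consistent with the feedforward path (V1, FEF → MT → LIP) the model proposes.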

Meeting abstract presented at VSS 2014
