August 2012
Volume 12, Issue 9
Vision Sciences Society Annual Meeting Abstract | August 2012
A neural model of border-ownership and motion in early vision
Author Affiliations
  • Arash Yazdanbakhsh
    Program in Cognitive and Neural Systems, Boston University
    Center for Computational Neuroscience and Neural Technology, Boston University
  • Oliver Layton
    Program in Cognitive and Neural Systems, Boston University
    Center for Computational Neuroscience and Neural Technology, Boston University
  • Ennio Mingolla
    Center for Computational Neuroscience and Neural Technology, Boston University
    Psychology Department, Boston University
Journal of Vision August 2012, Vol.12, 759. doi:10.1167/12.9.759

      Arash Yazdanbakhsh, Oliver Layton, Ennio Mingolla; A neural model of border-ownership and motion in early vision. Journal of Vision 2012;12(9):759. doi: 10.1167/12.9.759.

Abstract

Although models of figure-ground segregation often consider static scenes (Craft et al. 2007, J. Neurophysiology), the human visual system constantly deals with changes in light due to independently moving objects (IMOs), self-motion, and eye movements. Neurons (B cells) have been identified in primate visual cortex that are selective for border-ownership and integrate figural information from far outside their classical receptive fields (Zhou et al. 2000, J. Neuroscience). Although border-ownership information can come from many sources, humans perceive figure-ground relationships in moving random-dot displays that contain no structured patterns of luminance (Kaplan 1969, Perception & Psychophysics). Considering these challenges and the spatio-temporal physiological properties of early visual areas, we developed a neural model of border-ownership to better understand figure-ground segregation in moving displays. Model LGN transient cells with different conduction delays (Maunsell et al. 1999, Visual Neuroscience) spatially integrate moving random-dot input. We introduce units that detect spatio-temporal correlations independently of luminance magnitude by multiplicatively combining convergent LGN signals onto model V1 cells. Units compete across possible correlations in a recurrent competitive field configured as a winner-take-all network to locally determine the dominant direction of coherent motion. Grouping cells with larger receptive fields than model LGN and V1 units dynamically feed back to bias B cells in model V1/V2. Our model determines border-ownership signals at the edges of moving random-dot and IMO displays from the changes in spatio-temporal correlation that separate regions perceived as figure and ground. Unlike models that subtract different velocities to extract motion edges, our model does not require differential motion and predicts that figure-ground distinctions could emerge shortly after LGN signals converge onto V1 cells. Consistent with Rucci et al. (2007, Nature), the model predicts that fixational eye movements enhance spatial contrast sensitivity, which is useful for determining figure-ground relationships.
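The correlation-then-competition idea described in the abstract can be illustrated with a toy 1-D sketch. Everything below is an illustrative assumption, not the authors' actual circuitry: a Reichardt-style delay-and-multiply detector stands in for the multiplicative combination of delayed LGN signals, a simple per-location max stands in for the recurrent winner-take-all field, and the transient (temporal-derivative) stage is omitted because frame-to-frame dot refresh already provides transients. The figure strip drifts coherently while the surround is refreshed with uncorrelated dots, so the figure region acquires a dominant directional correlation even though its luminance statistics match the ground.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D random-dot movie: a central "figure" strip drifts rightward one
# position per frame, while the "ground" is refreshed with fresh,
# uncorrelated dots on every frame (Kaplan-style kinetic boundary).
T, N = 40, 64                 # frames, positions (illustrative sizes)
figure = slice(24, 40)        # hypothetical figure region
base = (rng.random(N) > 0.7).astype(float)   # dot density ~0.3
frames = np.zeros((T, N))
for t in range(T):
    frame = (rng.random(N) > 0.7).astype(float)   # ground: fresh dots
    frame[figure] = np.roll(base, t)[figure]      # figure: coherent drift
    frames[t] = frame

def correlation_stage(frames, delay=1, span=1):
    """Multiplicatively combine a conduction-delayed signal with a
    spatially offset current signal, for two opponent directions.
    Binary dots make the product insensitive to luminance magnitude."""
    delayed = frames[:-delay]          # earlier frames, delayed channel
    current = frames[delay:]           # later frames, direct channel
    right = delayed[:, :-span] * current[:, span:]   # left-then-right
    left  = delayed[:, span:]  * current[:, :-span]  # right-then-left
    return right.mean(axis=0), left.mean(axis=0)     # time-averaged

right, left = correlation_stage(frames)

# Crude stand-in for the winner-take-all competitive field: at each
# location the stronger direction channel suppresses the other.
dominant = np.where(right > left, right, -left)
```

In the figure interior the rightward channel correlates dot-for-dot across frames while the leftward channel (and both channels in the ground) multiply statistically independent dots, so `dominant` is positive over the figure and near zero elsewhere; the transition marks the kinetic edge where a border-ownership signal could be assigned.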

Meeting abstract presented at VSS 2012
