September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | August 2017
A model of optic flow parsing as error in prediction
Author Affiliations
  • Oliver Layton
    Department of Cognitive Science, Rensselaer Polytechnic Institute
  • Brett Fajen
    Department of Cognitive Science, Rensselaer Polytechnic Institute
Journal of Vision August 2017, Vol.17, 424. doi:https://doi.org/10.1167/17.10.424
Abstract

While many models emphasize feedforward processing as a driving factor in visual perception, others have posited a more central role for feedback (Carpenter & Grossberg, 1987; Hochstein & Ahissar, 2002). In particular, predictive coding theories have posited that feedback carries predictions about how sensory signals should appear and feedforward signals transmit the error between the sensory array and the prediction (prediction error; Friston, 2010). We propose that this view may shed light on an important problem that the visual system faces as humans move through the environment: object motion perception during self-motion. Although the pattern of motion on the retina reflects the sum of self-motion and object motion, humans perceive object motion relative to the stationary world (Rushton & Warren, 2005). This implicates a mechanism whereby the visual system factors out the self-motion component from the retinal optic flow (Layton & Fajen, 2016; Warren & Rushton, 2009). We suggest that world-relative object motion perception could emerge through interactions between areas MT and MST that attempt to minimize the discrepancy between the retinal flow and the predicted flow pattern consistent with the observer's self-motion. In our model, MT matches feedforward optic flow signals with feedback signals from MSTd carrying predictions about the expected global motion pattern associated with the observer's self-motion. Sensory signals that match the predicted motion parallax and disparity signals reinforce the self-motion signal in MSTd, and MT signals that mismatch the feedback from MSTd are suppressed. Because object motion signals naturally deviate from the prediction, MT-MSTd interactions automatically factor out the self-motion component. Hence, world-relative object motion signals emerge as a prediction error. Our model offers a new perspective on how humans perceive world-relative object motion during self-motion and clarifies related problems, such as how observers discern stationary from moving objects.
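The core arithmetic of flow parsing as prediction error can be illustrated with a minimal sketch. This is not the authors' MT-MSTd model (which involves neural interactions across visual areas); it is a simplified, hypothetical demonstration under standard assumptions: a pinhole camera with unit focal length, pure observer translation, and known depth at each image point. The predicted self-motion flow plays the role of the MSTd feedback signal; subtracting it from the retinal flow leaves a residual (the prediction error) that recovers the world-relative motion of an independently moving object.

```python
import numpy as np

def predicted_self_motion_flow(x, y, Z, T):
    """Flow predicted from self-motion alone (the 'feedback' prediction).

    For pure observer translation T = (Tx, Ty, Tz), a point at image
    position (x, y) with depth Z produces instantaneous retinal flow
    (pinhole model, focal length 1):
        u = (-Tx + x*Tz) / Z
        v = (-Ty + y*Tz) / Z
    """
    Tx, Ty, Tz = T
    u = (-Tx + x * Tz) / Z
    v = (-Ty + y * Tz) / Z
    return np.stack([u, v], axis=-1)

# A small scene: a 5x5 grid of image points at a common depth
x, y = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
Z = np.full_like(x, 4.0)
T = (0.0, 0.0, 1.0)  # observer translating straight ahead

# Retinal flow = self-motion flow plus one independently moving object
retinal_flow = predicted_self_motion_flow(x, y, Z, T)
obj_motion = np.array([0.3, 0.0])  # world-relative image motion of the object
retinal_flow[2, 3] += obj_motion

# Prediction error: retinal flow minus the self-motion prediction.
# The self-motion component cancels everywhere, so the residual is
# zero except at the moving object, where it equals obj_motion.
error = retinal_flow - predicted_self_motion_flow(x, y, Z, T)
print(np.allclose(error[2, 3], obj_motion))
```

In this toy form, "factoring out" the self-motion component is literal subtraction; the model described in the abstract instead achieves it through suppression of mismatching MT signals by MSTd feedback.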

Meeting abstract presented at VSS 2017
