July 2013
Volume 13, Issue 9
Vision Sciences Society Annual Meeting Abstract
Action-specific predictive coding of object states
Author Affiliations
  • Nicholas C. Hindy
Department of Psychology, Princeton University
    Princeton Neuroscience Institute, Princeton University
  • Nicholas B. Turk-Browne
Department of Psychology, Princeton University
    Princeton Neuroscience Institute, Princeton University
Journal of Vision July 2013, Vol.13, 490. doi:https://doi.org/10.1167/13.9.490
Our actions can determine visual input by changing the state of objects in the environment. Because the future state of an object is often predictable from its current state and a planned action, such actions provide a rich source of perceptual expectation. We used fMRI and a novel training paradigm to test whether associative learning can induce predictive coding, in visual cortex, of what an object will look like after an action. Action-outcome training consisted of an exploratory phase and a directed phase. Each trial of the exploratory phase began with a stimulus cue (a fractal pattern) in the middle of a computer screen, and the subject chose to press the left or right button to replace the cue with an outcome fractal. For each cue, one outcome appeared when the left button was pressed and a different outcome appeared when the right button was pressed. The directed phase was used to balance the frequency of specific cue-outcome transitions, with arrows appearing below fractal cues to prompt left and right responses. After both training phases, we used fMRI to assess the neural correlates and consequences of action-based predictive coding of outcomes. As during directed training, each trial in the scanner had three parts: a cue fractal, an arrow prompt to quickly press the corresponding left or right button, and an outcome fractal. We compared trials in which the outcome was predictable given the cue and action combination with trials in which the outcome was associated with the cue but expected only after the other action, and found that the BOLD response in object-selective visual cortex was affected by learned action contingencies. Actions and intentions may thus adaptively influence visual perception of objects via predictive coding of forthcoming object states.

Meeting abstract presented at VSS 2013
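The cue-action-outcome design described in the abstract can be summarized as a simple contingency table: each cue pairs each button press with a distinct outcome fractal, and a scanner trial is "predicted" when the shown outcome matches the action taken, or an "other-action" outcome when it matches the cue's alternative action. The sketch below is a hypothetical illustration of that logic; the names, stimuli, and structure are assumptions for clarity, not the authors' actual experiment code.

```python
# Hypothetical sketch of the cue + action -> outcome contingencies
# described in the abstract. Fractal stimuli are stand-ins (strings).

# Each cue maps each button press to a distinct outcome fractal.
CONTINGENCIES = {
    "cue1": {"left": "outcomeA", "right": "outcomeB"},
    "cue2": {"left": "outcomeC", "right": "outcomeD"},
}

def expected_outcome(cue, action):
    """Outcome predicted by the learned cue + action combination."""
    return CONTINGENCIES[cue][action]

def classify_trial(cue, action, shown_outcome):
    """Label a scanner trial relative to the learned contingencies."""
    if shown_outcome == expected_outcome(cue, action):
        return "predicted"             # outcome matches the action taken
    other = "right" if action == "left" else "left"
    if shown_outcome == expected_outcome(cue, other):
        return "other-action outcome"  # associated with the cue, but via the other action
    return "unassociated"

print(classify_trial("cue1", "left", "outcomeA"))  # predicted
print(classify_trial("cue1", "left", "outcomeB"))  # other-action outcome
```

The fMRI comparison in the abstract contrasts the "predicted" and "other-action outcome" trial types, which hold the cue-outcome association constant while varying whether the action makes the outcome expected.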

