Abstract
Our actions can determine visual input by changing the state of objects in the environment. Because the future state of an object is often predictable from its current state and a planned action, such actions provide a rich source of perceptual expectation. We used fMRI and a novel training paradigm to test whether associative learning can induce predictive coding in visual cortex of what an object will look like after an action. Action-outcome training consisted of an exploratory phase and a directed phase. Each trial of the exploratory phase began with a stimulus cue (a fractal pattern) in the middle of a computer screen, and the subject chose to press the left or right button to replace the cue with an outcome fractal. For each cue, one outcome appeared when the left button was pressed and a different outcome appeared when the right button was pressed. The directed training phase served to balance the frequency of specific cue-outcome transitions, with arrows appearing below the fractal cues to prompt left and right responses. After both training phases, we used fMRI to assess the neural correlates and consequences of action-based predictive coding of outcomes. As during directed training, each trial in the scanner comprised three parts: a cue fractal, an arrow prompting a quick press of the corresponding left or right button, and then an outcome fractal. We compared trials in which the outcome was predictable given the cue-action combination with trials in which the outcome was associated with the cue but expected only after the other action, and found that the BOLD response in object-selective visual cortex was affected by the learned action contingencies. Actions and intentions may thus adaptively influence visual perception of objects via predictive coding of forthcoming object states.
Meeting abstract presented at VSS 2013