Abstract
Actions change the appearance of objects in systematic ways, such as when opening a box or biting an apple. In a previous study, we showed that the medial temporal lobe (MTL) binds together different states of an object that are connected by such actions. In the current study, we use high-resolution fMRI to investigate how these object representations in the MTL may in turn license perceptual prediction when one object state is cued and a predictive action is executed. The study began with associative training in which cue stimuli appeared individually and subjects pressed a button to transform each cue into an outcome stimulus. For some cues ("strong coupling"), one outcome appeared when the left button was pressed and a different outcome appeared when the right button was pressed. For other cues ("weak coupling"), both outcomes appeared with equal probability, irrespective of which button was pressed. Replicating our prior work, the different action-outcome transitions for a given strong-coupling (but not weak-coupling) cue were represented more similarly to one another in the MTL. To examine how this learning affected perception, we measured the extent to which outcomes were predictively instantiated in visual cortex when the corresponding cue-action transition occurred. To disentangle voxel activity patterns specific to cue-action transitions from patterns evoked by the outcomes themselves, we included trials in which each cue-action transition was followed by a blank screen instead of an outcome, and trials in which each outcome appeared in isolation, without a preceding cue or action. On strong-coupling (but not weak-coupling) trials with a blank screen, voxel patterns in early visual cortex were more similar to the pattern elicited by the corresponding outcome than to the patterns elicited by unassociated but equally familiar outcomes. This suggests that object representations in the MTL may be a source of predictive coding in visual cortex.
Meeting abstract presented at VSS 2014
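The critical comparison can be illustrated with a minimal sketch, assuming Pearson correlation over voxel patterns as the similarity metric (the abstract does not specify the metric, and all array names below are hypothetical stand-ins for the measured data):

import numpy as np

def pattern_similarity(a, b):
    # Pearson correlation between two voxel activity patterns
    # (assumed metric; the abstract does not specify one)
    return np.corrcoef(a, b)[0, 1]

# Hypothetical toy inputs: 1-D voxel patterns from early visual cortex
# blank_trial    : pattern from a cue-action trial followed by a blank screen
# paired_outcome : pattern from the associated outcome shown in isolation
# other_outcome  : pattern from an unassociated but equally familiar outcome
rng = np.random.default_rng(0)
blank_trial = rng.normal(size=500)
paired_outcome = blank_trial + rng.normal(scale=2.0, size=500)  # toy data
other_outcome = rng.normal(size=500)

# Prediction index: positive values indicate that the blank-screen pattern
# resembles the associated outcome more than an unassociated outcome,
# as reported for strong-coupling (but not weak-coupling) trials.
prediction_index = (pattern_similarity(blank_trial, paired_outcome)
                    - pattern_similarity(blank_trial, other_outcome))
print(prediction_index)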