Abstract
One of the longest-standing questions in object recognition is how malleable object representations are to top-down cognitive factors. While behavioral studies suggest that knowledge, experience, and expectations modulate the representation of objects, less is known about how neural object responses are modulated by such top-down factors. Here, we studied how the observer’s current goals modulate cortical object representations by examining neural responses elicited by the same visual images under multiple types of task. Specifically, we examined whether the current task could be decoded from multivoxel response patterns in object-selective cortex. In a fully interleaved event-related fMRI design, participants categorized objects from eight categories (e.g. cows, flowers, motorbikes) under six different tasks. Half the tasks required stimulus-based perceptual judgments (e.g. stimulus orientation) and half required more conceptually based semantic judgments (e.g. animacy). Critically, all images were presented in all tasks, precluding any association between task and visual information. We found that both the response magnitude and the response patterns in object-selective cortex differed between tasks, particularly between the semantic and perceptual tasks. Response magnitude was higher in the perceptual than in the semantic tasks (the two types of task were equated for difficulty), and the two types of task could be decoded from their response patterns. Further, while the individual perceptual tasks were highly discriminable, the semantic tasks all elicited similar response patterns. At the whole-brain level, the tasks had distinct activation signatures spanning frontal, parietal, and occipito-temporal regions: the semantic tasks elicited strong activation in lateral frontal cortex, whereas the perceptual tasks predominantly engaged more posterior regions. These findings provide initial evidence that object representations in object-selective cortex are flexible and task-dependent. However, the type of task matters: whereas the stimulus-based perceptual tasks were clearly distinguished, the conceptually based semantic tasks did not elicit differential representations.
Meeting abstract presented at VSS 2012
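
As an informal illustration of the task-decoding analysis described above, the sketch below decodes task type (perceptual vs. semantic) from simulated multivoxel response patterns. The choice of classifier (linear SVM), the leave-one-run-out cross-validation scheme, and all data dimensions are assumptions for illustration only; the abstract does not specify these analysis details.

```python
# Illustrative sketch only: classifier, cross-validation scheme, and data
# dimensions are assumed, not taken from the study.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Simulated data: 240 trials x 500 voxels from an object-selective ROI,
# with trials split evenly across 8 scanner runs.
n_trials, n_voxels, n_runs = 240, 500, 8
X = rng.standard_normal((n_trials, n_voxels))            # single-trial response patterns
y = rng.integers(0, 2, size=n_trials)                    # 0 = perceptual task, 1 = semantic task
runs = np.repeat(np.arange(n_runs), n_trials // n_runs)  # run labels for cross-validation

# Decode task type from the multivoxel patterns, training on all-but-one run
# and testing on the held-out run.
clf = LinearSVC(max_iter=5000)
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print(f"Mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

With random data as above, accuracy should hover around chance (0.50); above-chance accuracy on real data would indicate that task type is reflected in the response patterns.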