Abstract
In daily life, humans are constantly required to select behaviorally relevant targets from cluttered and complex environments. Previous neuroimaging studies have linked the remarkable efficiency of such selection to a selective enhancement of the representation of behaviorally relevant stimulus categories in visual cortex. Although these studies have revealed important insights into the neural basis of real-world search, the temporal unfolding of these effects remains unclear. Here, we recorded MEG activity while participants searched for categorical targets (persons or cars) in real-world scenes. Using multivariate decoding, we then attempted to recover the presence and location of these two categories within a scene from MEG sensor patterns. We found that classifiers trained on patterns evoked by persons and cars presented in isolation could reliably distinguish between scenes containing persons and scenes containing cars. Additionally, we were able to decode the location of the behaviorally relevant target category: classifiers trained on an independent attentional orienting task could distinguish whether the target category appeared on the right or the left side of a scene. Both category and location information could be retrieved shortly after scene onset, indicating that the processing of visual categories in complex scenes involves an early boost of category information accompanied by attentional orienting towards the relevant target. More generally, our results demonstrate the viability of using MEG to compare representations across different sets of stimuli and tasks to gain insight into the temporal dynamics of visual processing.
Meeting abstract presented at VSS 2015
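To make the cross-decoding logic concrete, the following is a minimal sketch of the general approach described above: a classifier is trained at each time point on sensor patterns from one condition (here, persons vs. cars in isolation) and tested on patterns from another (scenes containing either category). The abstract does not specify the classifier, preprocessing, or data dimensions, so all of those choices, variable names, and the synthetic data below are illustrative assumptions, not the authors' pipeline.

# Sketch of time-resolved cross-decoding, assuming MEG data as NumPy
# arrays of shape (n_trials, n_sensors, n_timepoints). Classifier choice
# (LDA) and all dimensions are assumptions for illustration only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_train, n_test, n_sensors, n_times = 200, 100, 271, 120

# Hypothetical training set: patterns evoked by persons and cars
# presented in isolation (labels: 0 = person, 1 = car).
X_train = rng.standard_normal((n_train, n_sensors, n_times))
y_train = rng.integers(0, 2, n_train)

# Hypothetical test set: patterns evoked by cluttered scenes that
# contain either a person or a car.
X_test = rng.standard_normal((n_test, n_sensors, n_times))
y_test = rng.integers(0, 2, n_test)

# Train a separate classifier per time point on the isolated-object
# patterns, then test it on the scene patterns at the same latency.
accuracy = np.empty(n_times)
for t in range(n_times):
    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    clf.fit(X_train[:, :, t], y_train)
    accuracy[t] = clf.score(X_test[:, :, t], y_test)

# Above-chance accuracy (> 0.5) at a given latency would indicate that
# category information generalizes from isolated objects to scenes.
print(accuracy.round(2))

Training independently at each time point is what yields the temporal profile of decodable information; with real (rather than synthetic) data, the latency at which accuracy first exceeds chance would index how early category information becomes available after scene onset.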