Abstract
During a visual target search task, a stored representation of a sought object is continually compared with a sensory representation of the currently viewed visual scene. This comparison is likely performed through feedback modulation of sensory responses in higher visual areas, resulting in a representation of a variable encoding the conjunction between the viewed and sought objects (i.e., a "match" or decision signal) (Pagan, Urban, Wohl, & Rust, 2013). To investigate the evolution of this "match" representation in human cortex, we trained subjects to perform a visual matching task using Fribble object stimuli. The same set of 8 objects served as both sought and viewed objects, so that each of the 64 possible sought-object/viewed-object combinations was sampled. We used multiband 3T BOLD fMRI to record changes in activation in occipital and parietal visual cortex while subjects performed this task. We used a linear support vector machine classifier, trained on activation patterns in each independently defined ROI, to decode the identity of the viewed object, as well as the presence of a match between the viewed and sought objects. Within several regions of early visual cortex, we found that decoding of the viewed object was above chance, while decoding of an item's status as a match was at chance. In contrast, in several higher visual regions in ventral and lateral occipital cortex, we found that the viewed object could not be decoded, but the presence or absence of a match could be decoded with above-chance accuracy. These results suggest a transition across the ventral visual stream from predominantly stimulus-driven representations to abstract representations of task variables.
Meeting abstract presented at VSS 2017
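To make the ROI-based decoding analysis concrete, here is a minimal sketch in Python with scikit-learn. The data and names (roi_patterns, viewed_labels, match_labels) are hypothetical stand-ins, not the authors' pipeline; a real analysis would use trial-wise activation estimates (e.g., GLM betas) from each independently defined ROI in place of the random arrays.

```python
# Minimal sketch of linear-SVM decoding within a single ROI, assuming one
# activation pattern per trial. All data below are synthetic placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# Hypothetical inputs: n_trials patterns of n_voxels each, a viewed-object
# label (0-7, one of the 8 Fribbles), and a binary match label.
n_trials, n_voxels = 320, 200
roi_patterns = rng.standard_normal((n_trials, n_voxels))
viewed_labels = rng.integers(0, 8, size=n_trials)   # which object was shown
match_labels = rng.integers(0, 2, size=n_trials)    # viewed == sought?

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))

# Decode viewed-object identity (chance = 1/8) and match status (chance = 1/2)
# from the same ROI patterns; cross_val_score reports classification accuracy.
viewed_acc = cross_val_score(clf, roi_patterns, viewed_labels, cv=cv).mean()
match_acc = cross_val_score(clf, roi_patterns, match_labels, cv=cv).mean()
print(f"viewed-object decoding accuracy: {viewed_acc:.3f} (chance = 0.125)")
print(f"match decoding accuracy:         {match_acc:.3f} (chance = 0.500)")
```

In practice, "above chance" would be established statistically, for example with a permutation test that shuffles labels within the cross-validation scheme, rather than by comparison to the nominal chance level alone.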