September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
Occipital and parietal cortex encode representations of match between a viewed and sought object during visual target search
Author Affiliations
  • Margaret Henderson
    University of California, San Diego
  • John Serences
    University of California, San Diego
Journal of Vision August 2017, Vol.17, 1136. doi:
Margaret Henderson, John Serences; Occipital and parietal cortex encode representations of match between a viewed and sought object during visual target search. Journal of Vision 2017;17(10):1136.

During a visual target search task, a stored representation of a search object is continually compared to a sensory representation of the currently viewed visual scene. This comparison is likely performed through feedback modulation of sensory responses in higher visual areas, resulting in a representation of a variable encoding the conjunction between the viewed and sought objects (i.e., a "match" or decision signal) (Pagan, Urban, Wohl, & Rust, 2013). To investigate the evolution of this "match" representation in human cortex, we trained subjects to perform a visual matching task using Fribble object stimuli. A set of 8 objects was used for both the sought and viewed object, so that each of 64 possible sought object/viewed object combinations was sampled. We used multiband 3T BOLD fMRI to record changes in activation in visual occipital and parietal cortex while subjects performed this task. We used a linear support vector machine classifier, trained on activation patterns in each independently defined ROI, to decode the identity of the viewed object, as well as the presence of a match between the viewed and sought objects. Within several regions of early visual cortex, we found that decoding of the viewed object was above chance, while decoding of an item's status as a match was at chance levels. In contrast, in several higher visual regions in ventral and lateral occipital cortex, we found that the viewed object could not be decoded, but the presence or absence of a match could be decoded with above-chance accuracy. These results suggest a transition across the ventral visual stream from predominantly stimulus-driven representations to abstract representations of task variables.
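The decoding approach described above can be sketched in code. The following is a minimal illustration using synthetic data in place of real BOLD activation patterns; the linear SVM and per-ROI cross-validated decoding follow the abstract, but all specific parameters (voxel count, trial count, signal strength, number of CV folds) are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch of ROI-based decoding with a linear SVM, on synthetic data
# standing in for fMRI activation patterns from one ROI.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels = 128, 200           # assumed sizes for one ROI
viewed = rng.integers(0, 8, n_trials)   # which of 8 objects was viewed
sought = rng.integers(0, 8, n_trials)   # which of 8 objects was sought
match = (viewed == sought).astype(int)  # trial-wise match/non-match label

# Synthetic patterns: Gaussian noise plus a weak viewed-object signal
# injected into the first 8 voxels.
X = rng.normal(size=(n_trials, n_voxels))
X[:, :8] += np.eye(8)[viewed]

clf = LinearSVC(max_iter=10000)

# Decode viewed-object identity (chance = 1/8).
obj_acc = cross_val_score(clf, X, viewed, cv=4).mean()

# Decode match vs. non-match (note: classes are imbalanced here;
# the actual experiment may have balanced match/non-match trials).
match_acc = cross_val_score(clf, X, match, cv=4).mean()

print(f"viewed-object decoding accuracy: {obj_acc:.2f}")
print(f"match decoding accuracy:         {match_acc:.2f}")
```

Comparing these two accuracies against their respective chance levels, ROI by ROI, is the logic behind the early-visual versus higher-visual dissociation reported in the abstract.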

Meeting abstract presented at VSS 2017

