Vision Sciences Society Annual Meeting Abstract  |   December 2022
Journal of Vision, Volume 22, Issue 14 (Open Access)
Categorization-dependent dynamic representation, selection and reduction of stimulus features in brain networks
Author Affiliations & Notes
  • Yaocong Duan
    School of Psychology and Neuroscience, University of Glasgow
  • Robin Ince
    School of Psychology and Neuroscience, University of Glasgow
  • Joachim Gross
  • Philippe Schyns
    School of Psychology and Neuroscience, University of Glasgow
  • Footnotes
    Acknowledgements  P.G.S. received support from the Wellcome Trust (Senior Investigator Award, UK; 107802) and the MURI/Engineering and Physical Sciences Research Council (USA, UK; 172046-01). The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.
Journal of Vision December 2022, Vol.22, 3295. doi:https://doi.org/10.1167/jov.22.14.3295
      © ARVO (1962-2015); The Authors (2016-present)

Abstract

A single image can afford multiple categorizations, each resulting from brain networks specifically processing the features relevant to each task. To understand where, when and how brain networks selectively process these features, our experiment comprised four different 2-Alternative-Forced-Choice (2-AFC) categorizations of the same 64 original images of a realistic city street with varying embedded targets: a central face (male vs. female; happy vs. neutral), flanked on the left by a pedestrian (male vs. female) and on the right by a parked vehicle (car vs. SUV). Bubbles randomly sampled each image to generate 768 stimuli. In a within-participant design (N = 10), each participant performed the four tasks in four blocks on the same 768 stimuli, each repeated twice in random order. We concurrently recorded their categorization responses and source-localized MEG activity. We reconstructed the features each participant used in each task, computed as Mutual Information(Pixel visibility; Correct vs. Incorrect). We show (1) that each task incurs usage of task-specific features in each participant (e.g., body parts in pedestrian-gender categorization vs. vehicle components in vehicle categorization) and (2) that even the same categorization (e.g., pedestrian gender) incurs usage of different features across participants (e.g., upper vs. lower body parts). Critically, brain networks adaptively changed their representations of the same features in the activity of the same MEG sources depending on whether the task made them relevant, computed as Synergy(Feature visibility; MEG; Categorization tasks). When task-relevant, features are each selected from occipital to higher cortical regions for categorization; when task-irrelevant, each is quickly reduced into occipital cortex [<170 ms]. Reconstructed network connectivity shows communication of only task-relevant features from sending occipital cortex [50-100 ms] to receiving right Fusiform Gyrus [100-130 ms]. All results replicate in 10/10 participants, uniquely demonstrating where, when and how brain networks dynamically select vs. reduce stimulus features to accomplish multiple categorization behaviors.
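To make the feature-reconstruction step concrete, the sketch below estimates MI(Pixel visibility; Correct vs. Incorrect) per pixel from Bubbles trials. It is a minimal illustration under simplifying assumptions (binary per-pixel visibility, a plug-in discrete MI estimator), not the authors' actual analysis pipeline; the names and array shapes (discrete_mi_bits, bubbles_feature_map, bubble_masks, correct) are hypothetical.

```python
import numpy as np

def discrete_mi_bits(x, y):
    """Plug-in estimate of mutual information I(X; Y) in bits for two
    discrete (here binary) 1-D arrays of equal length."""
    x = np.asarray(x, dtype=int)
    y = np.asarray(y, dtype=int)
    mi = 0.0
    for xv in np.unique(x):
        px = np.mean(x == xv)
        for yv in np.unique(y):
            py = np.mean(y == yv)
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

def bubbles_feature_map(bubble_masks, correct):
    """Per-pixel MI(pixel visibility; correct vs. incorrect) image.

    bubble_masks : (n_trials, height, width) binary array; 1 where the
                   Bubbles sampling made that pixel visible on that trial.
    correct      : (n_trials,) binary array; 1 = correct response.
    """
    n_trials, height, width = bubble_masks.shape
    feature_map = np.zeros((height, width))
    for i in range(height):
        for j in range(width):
            feature_map[i, j] = discrete_mi_bits(bubble_masks[:, i, j], correct)
    return feature_map

# Example with random data standing in for one participant's trials in one task:
rng = np.random.default_rng(0)
masks = rng.integers(0, 2, size=(768, 32, 32))    # 768 bubbled stimuli
responses = rng.integers(0, 2, size=768)          # correct vs. incorrect
mi_image = bubbles_feature_map(masks, responses)  # high values = diagnostic pixels
```

The same logic generalizes conceptually to the brain-level analyses in the abstract (e.g., Synergy(Feature visibility; MEG; Categorization tasks)), which require multivariate information-theoretic estimators rather than the simple binary plug-in estimate shown here.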
