August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
A top-down attentional network selects vs. reduces the same features for different visual categorizations of the same scenes
Author Affiliations & Notes
  • Yaocong Duan
    School of Psychology and Neuroscience, University of Glasgow
  • Robin Ince
    School of Psychology and Neuroscience, University of Glasgow
  • Joachim Gross
    Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Germany
  • Philippe Schyns
    School of Psychology and Neuroscience, University of Glasgow
  • Footnotes
    Acknowledgements  P.G.S. was supported by the EPSRC [MURI 1720461] and the Wellcome Trust [Senior Investigator Award; 107802]. P.G.S. is a Royal Society Wolfson Fellow [RSWF\R3\183002]. R.A.A.I. was supported by the Wellcome Trust [214120/Z/18/Z].
Journal of Vision August 2023, Vol.23, 5287. https://doi.org/10.1167/jov.23.9.5287

Yaocong Duan, Robin Ince, Joachim Gross, Philippe Schyns; A top-down attentional network selects vs. reduces the same features for different visual categorizations of the same scenes. Journal of Vision 2023;23(9):5287. https://doi.org/10.1167/jov.23.9.5287.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Selective attention enables the brain to cope efficiently with overwhelming amounts of visual information by selecting only the inputs relevant to the categorization task at hand while ignoring the rest. Here, we study the mechanisms of categorization-dependent attentional selection in an experiment that used 64 images of a realistic city street, each embedding varying targets: a central face (task 1: "male" vs. "female"; task 2: "happy" vs. "neutral"), flanked on the left by a pedestrian (task 3: "male" vs. "female") and on the right by a parked vehicle (task 4: "car" vs. "SUV"). Bubbles randomly sampled each image to generate 768 stimuli. In a within-participant design (N = 10), each participant performed the four 2-AFC categorization tasks listed above on the same stimulus set, while we concurrently recorded their categorization responses and source-localized MEG activity. First, we reconstructed the features each participant used in each task, computed as Mutual Information(Pixel visibility; Correct vs. Incorrect). Then, we traced the dynamic representation of each feature in the MEG source responses, computing Mutual Information(Feature visibility; MEG sources), and examined how the different categorization tasks modulate the MEG source representations of the same stimulus features, computing Synergy(Feature visibility; MEG sources; Task). Based on these synergistic interactions, we reconstructed an attentional network that selects task-relevant features and reduces the same features when they are task-irrelevant. Specifically, left dorsal prefrontal and left ventrolateral prefrontal cortex (~80-90 ms) interact with the ventral and dorsal pathways to change the representational format of the same features depending on the task, i.e., imposing opponent formats on the amplitude responses of the same sources at ~96-120 ms. When task-relevant, each feature is selected from occipital cortex through to higher cortical regions for categorization; when task-irrelevant, each is quickly reduced within occipital cortex (<170 ms).
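
To make the information-theoretic quantities concrete, the sketch below computes plug-in estimates of Mutual Information(Feature visibility; MEG source) and a three-way co-information whose negative values indicate synergy, one common proxy for the Synergy(Feature visibility; MEG sources; Task) measure named above. This is a minimal illustration in Python assuming discretized (binned) MEG amplitudes; the function names and toy data are ours, not the authors' code, and continuous MEG responses would in practice call for a more robust estimator (e.g., a Gaussian-copula MI). The authors' exact estimator and sign convention may differ.

    import numpy as np

    def mutual_information(x, y):
        # Plug-in estimate of MI(X; Y) in bits for discrete 1-D variables.
        _, xi = np.unique(np.asarray(x), return_inverse=True)
        _, yi = np.unique(np.asarray(y), return_inverse=True)
        joint = np.zeros((xi.max() + 1, yi.max() + 1))
        np.add.at(joint, (xi, yi), 1.0)       # joint histogram of (x, y)
        joint /= joint.sum()
        px = joint.sum(axis=1, keepdims=True)  # marginal P(x)
        py = joint.sum(axis=0, keepdims=True)  # marginal P(y)
        nz = joint > 0
        return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

    def conditional_mi(x, y, z):
        # I(X; Y | Z): MI within each task, weighted by task frequency.
        x, y, z = map(np.asarray, (x, y, z))
        return sum((z == v).mean() * mutual_information(x[z == v], y[z == v])
                   for v in np.unique(z))

    def co_information(feature, meg, task):
        # Co-information I(F; M; T) = I(F; M) - I(F; M | T).
        # Negative values indicate synergy: knowing the task adds
        # information about how the MEG source represents the feature.
        return (mutual_information(feature, meg)
                - conditional_mi(feature, meg, task))

    # Toy demonstration: a source that represents the feature in one task only.
    rng = np.random.default_rng(0)
    n = 768                                    # one Bubbles stimulus set, as above
    feature = rng.integers(0, 2, n)            # feature visible on this trial?
    task = rng.integers(0, 2, n)               # which categorization task
    meg = np.where(task == 1, feature, rng.integers(0, 2, n))  # binned amplitude
    print(co_information(feature, meg, task))  # negative, i.e., synergistic

In the toy data the source tracks the feature only in task 1, so I(F; M | T) exceeds the marginal I(F; M) and the co-information comes out negative, which is the signature of the task-dependent representation the abstract describes.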
