September 2021
Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2021
Brain networks dynamically represent and transfer behaviorally-relevant face and object features but quickly reduce them when they are behaviorally-irrelevant
Author Affiliations & Notes
  • Yaocong Duan
    University of Glasgow
  • Robin Ince
    University of Glasgow
  • Joachim Gross
    York Biomedical Research Institute, University of York, UK
  • Philippe Schyns
    University of Glasgow
  • Footnotes
    Acknowledgements  P.G.S. received support from the Wellcome Trust (Senior Investigator Award, UK; 107802) and the MURI/Engineering and Physical Sciences Research Council (USA, UK; 172046-01). The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.
Journal of Vision September 2021, Vol.21, 2178. doi:

      Yaocong Duan, Robin Ince, Joachim Gross, Philippe Schyns; Brain networks dynamically represent and transfer behaviorally-relevant face and object features but quickly reduce them when they are behaviorally-irrelevant. Journal of Vision 2021;21(9):2178.


      © ARVO (1962-2015); The Authors (2016-present)


To categorize visual inputs, brain networks selectively attend to, represent and transfer the stimulus features that are behaviorally relevant, and likely reduce those that are not. Where, when and how these information processes happen remains largely unknown, in part because the specific face, object or scene features that brain (and deep) networks process for task behavior themselves remain unknown, precluding information-processing accounts. Here, we demonstrate that brain networks dynamically represent and transfer face and object features when they are relevant for task behavior, but reduce them when they are not. Ten participants each applied four categorization tasks (in different sessions) to the same stimulus set, to isolate task effects on face and object features and to control likely low-level confounds (e.g. when contrasting images of faces vs. objects). Each 2-AFC task involved the same pictures of a realistic, typical city scene comprising varying targets: a centrally positioned face (male vs. female; happy vs. neutral), flanked on the left by a pedestrian (male or female) and on the right by a parked vehicle (car vs. SUV). Each trial presented image pixels randomly sampled with Bubbles, which each participant categorized while we recorded their MEG activity (on 12,773 voxels) and behavior. Independently for each participant (9 replications), information-theoretic quantities revealed (1) the image features relevant for each categorization task, (2) their representation and transfer (post ~120 ms) from occipital cortex into the ventral and parietal pathways when they are relevant for behavior (e.g. the parked vehicle in car vs. SUV) but (3) their rapid (before 170 ms) reduction in occipital cortex when they are task-irrelevant (e.g. the parked vehicle in male vs. female face).
This approach, psychophysically grounded in the processing of behaviorally relevant information, better realizes the elusive promise of neuroimaging by providing novel insights into the information-processing algorithms of the brain.
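The feature-diagnosticity analysis described above pairs Bubbles sampling of image pixels with information-theoretic measures linking what was revealed on each trial to behavior. As a conceptual illustration only (this is not the authors' pipeline: the function names, the binarized pixel-visibility simplification, and the simulated responses are all assumptions), a minimal sketch might compute, for each pixel, the mutual information between whether that pixel was revealed and the participant's binary response:

```python
import numpy as np

def bubble_mask(shape, n_bubbles, sigma, rng):
    """Random 'Bubbles' mask: a sum of Gaussian apertures at random locations,
    clipped to [0, 1], that reveals a random subset of image pixels."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

def pixel_mutual_information(masks, responses):
    """Plug-in mutual information (bits), per pixel, between binarized pixel
    visibility (revealed vs. not) and a binary behavioral response."""
    visible = masks > 0.5                      # trials x h x w, boolean
    resp = np.asarray(responses, dtype=bool)   # trials
    mi = np.zeros(masks.shape[1:])
    for v in (False, True):
        for r in (False, True):
            # Joint and marginal probabilities estimated across trials.
            p_joint = ((visible == v) & (resp == r)[:, None, None]).mean(axis=0)
            p_v = (visible == v).mean(axis=0)
            p_r = (resp == r).mean()
            with np.errstate(divide="ignore", invalid="ignore"):
                term = p_joint * np.log2(p_joint / (p_v * p_r))
            mi += np.nan_to_num(term)          # 0 * log(0) contributes 0
    return mi
```

In a simulation where responses are driven by a single diagnostic pixel (plus noise), the MI map peaks at that pixel, mirroring how the analysis isolates task-relevant features. The actual study goes further, relating the revealed features to MEG source activity over time and across the occipital, ventral and parietal pathways, typically with more robust MI estimators than this plug-in version.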

