Abstract
To categorize visual inputs, brain networks selectively attend to, represent and transfer the stimulus features that are behaviorally relevant, and likely reduce those that are not. Where, when and how these information processes happen remains largely unknown, in part because the specific face, object or scene features that the brain (and deep) networks process for task behavior remain themselves unknown, precluding information processing accounts. Here, we demonstrate that brain networks dynamically represent and transfer face and object features when they are relevant for task behavior, but reduce them when they are not. Ten participants each applied four categorization tasks (in different sessions) to the same stimulus set, to isolate task effects on face and object features and to control for likely low-level confounds (e.g. those arising when contrasting images of faces vs. objects). Each 2-AFC task involved the same pictures of a realistic, typical city scene comprising varying targets: a centrally positioned face (male vs. female; happy vs. neutral), flanked on the left by a pedestrian (male or female) and on the right by a parked vehicle (car vs. SUV). On each trial, image pixels were randomly sampled with Bubbles, and each participant categorized the resulting image while we recorded their MEG activity (on 12,773 voxels) and behavior. Independently for each participant (9 replications), information-theoretic quantities revealed (1) the image features relevant for each categorization task, (2) their representation and transfer (post ~120 ms) from occipital cortex into the ventral and parietal pathways when they are relevant for behavior (e.g. the parked vehicle in car vs. SUV), but (3) their rapid (before 170 ms) reduction in occipital cortex when they are task-irrelevant (e.g. the parked vehicle in male vs. female face categorization). This approach, psychophysically grounded in the processing of behaviorally relevant information, better realizes the elusive promise of neuroimaging by providing novel insights into the information processing algorithms of the brain.
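To illustrate the first information-theoretic step described above, the following minimal sketch (not the authors' code) shows how, in a Bubbles experiment, per-pixel mutual information between a pixel's sampling state (revealed vs. occluded) and categorization accuracy can identify behaviorally relevant image features. The data shapes, the discrete MI estimator and the function names are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def mutual_information(x, y):
    """Discrete mutual information (bits) between two binary 1-D arrays."""
    mi = 0.0
    for xv in (0, 1):
        for yv in (0, 1):
            p_xy = np.mean((x == xv) & (y == yv))
            p_x, p_y = np.mean(x == xv), np.mean(y == yv)
            if p_xy > 0:
                mi += p_xy * np.log2(p_xy / (p_x * p_y))
    return mi

def pixel_relevance(bubble_masks, correct):
    """
    bubble_masks : (n_trials, height, width) binary arrays; 1 where a Bubbles
                   aperture revealed that pixel on that trial (assumed format).
    correct      : (n_trials,) binary array; 1 for a correct categorization.
    Returns a (height, width) map of MI values: high values mark pixels whose
    visibility co-varies with behavior, i.e. task-relevant features.
    """
    n_trials, h, w = bubble_masks.shape
    mi_map = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            mi_map[i, j] = mutual_information(bubble_masks[:, i, j], correct)
    return mi_map

# Example with simulated trials: 500 trials of 64 x 64 Bubbles masks.
rng = np.random.default_rng(0)
masks = rng.integers(0, 2, size=(500, 64, 64))
responses = rng.integers(0, 2, size=500)
relevance = pixel_relevance(masks, responses)
```

In the same spirit, the paper's source-space analyses relate such stimulus features to MEG activity and behavior over time; the sketch covers only the behavioral (feature-to-response) step under the stated assumptions.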