Abstract
A key challenge in neuroscience remains to understand, from a systems perspective, where, when and how brain networks represent and dynamically transform sensory information for behavior. To address this challenge, we set up a visual decision task that requires brain networks to resolve the basic algorithmic functions of computation theory: XOR (N = 5), AND (N = 5) and OR (N = 5). We separated the two visual inputs in space (so each initially projected to contralateral occipital cortex) and time (with a 1 s delay between the two inputs) to constrain their subsequent representation, transfer and integration across hemispheres for decision behavior. Each trial started with one lateralized input (i.e. a dark or clear lens of a pair of glasses on a face) while the other input was greyed out for 1 s, after which this second input became dark or clear. Using linear regressions, we decomposed concurrently recorded 248-sensor magnetoencephalography data (source-localized with an LCMV beamformer) into different systems-level dynamic processes that (a) linearly represent and transfer the left and right visual inputs (i.e. left and right dark vs. clear lens), then (b) nonlinearly integrate them for task decisions, and finally (c) influence behavioural reaction times (RTs). Next, we investigated task-specific strategies. Specifically, the first input can logically disclose the AND and OR responses on trials when it is clear in AND and dark in OR, whereas XOR always requires integration of both inputs. RTs were indeed shorter when the first input disclosed the response, but neural synergistic interactions (i.e. input integration) were stronger when both inputs had to be integrated (i.e. on trials when the first input did not logically disclose the AND and OR responses). Finally, this difference in synergistic integration related to behavior on a trial-by-trial basis in the AND and OR tasks, but not in the XOR task.
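The disclosure logic described above can be made concrete in a minimal sketch. Assuming the binary coding dark = 1 and clear = 0 (an assumption for illustration; the abstract does not fix a numeric coding), a first input "discloses" the task outcome when the response is the same for either value of the second input:

```python
def discloses(task, first):
    """True if the first input alone determines the task outcome.

    Assumed coding (illustrative): dark lens = 1, clear lens = 0.
    """
    ops = {
        "AND": lambda a, b: a & b,
        "OR":  lambda a, b: a | b,
        "XOR": lambda a, b: a ^ b,
    }
    op = ops[task]
    # The outcome is disclosed if it is identical for both possible second inputs.
    outcomes = {op(first, second) for second in (0, 1)}
    return len(outcomes) == 1

# AND is disclosed only by a clear (0) first input; OR only by a dark (1)
# first input; XOR is never disclosed by the first input alone.
for task in ("AND", "OR", "XOR"):
    disclosing = [first for first in (0, 1) if discloses(task, first)]
    print(task, disclosing)
```

Running this prints `AND [0]`, `OR [1]`, `XOR []`, matching the abstract's claim that a clear first input discloses AND, a dark first input discloses OR, and XOR always requires both inputs.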