Abstract
A fundamental challenge in neuroscience is to understand where, when and how brain networks process information. Neuroscientists have approached this question partly by measuring brain activity across space, time and levels of granularity. However, rather than measuring brain activity per se, our aim is to understand the specific algorithmic functions that this activity reflects [1-3].
To address this, we studied the XOR primitive, a foundational nonlinear algorithmic function that returns “true” when exactly one of its inputs is true. To study this transformation in the brain, we manipulated the lenses of a pair of glasses presented to the left and right visual hemifields. Each participant (N = 10) responded “yes” when only one of the lenses was dark, while we recorded their brain activity with MEG (SupMat Fig 1, caption). Other participants performed AND (N = 9) and OR (N = 8) tasks on the same stimuli.
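As a worked illustration (not part of the study's materials), the three response rules can be written as Boolean functions of the two binary lens states; the sketch below is a minimal Python rendering with hypothetical function names:

```python
# Minimal sketch of the three response rules on the binary lens states.
# Convention (an assumption for illustration): True = dark lens, False = clear lens.

def xor_task(left_dark: bool, right_dark: bool) -> bool:
    """Respond "yes" when exactly one lens is dark."""
    return left_dark != right_dark

def and_task(left_dark: bool, right_dark: bool) -> bool:
    """Respond "yes" when both lenses are dark."""
    return left_dark and right_dark

def or_task(left_dark: bool, right_dark: bool) -> bool:
    """Respond "yes" when at least one lens is dark."""
    return left_dark or right_dark

# Truth table over the four possible stimulus combinations.
for left in (False, True):
    for right in (False, True):
        print(left, right, "XOR:", xor_task(left, right),
              "AND:", and_task(left, right), "OR:", or_task(left, right))
```

Writing out the four stimulus combinations makes explicit that XOR, unlike AND and OR, is not linearly separable in the two inputs, which is why it requires a nonlinear integration of the two lens representations.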
We analyzed the spatiotemporal representation of the binary lens color (dark or clear) to find out where (i.e. in which brain regions), when and how (i.e. in which MEG voxel activity) each lens is individually represented, versus where, when and how the two lenses are nonlinearly integrated for the decision (SupMat Fig 2, caption). We performed this comparison per participant, on all cortical voxels, from 0 to 300 ms post-stimulus.
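The abstract does not specify the decoding pipeline; as a hedged illustration only, the sketch below shows one common way to ask the where-and-when question, namely time-resolved linear decoding of a single lens's color from voxel activity with scikit-learn. The data here are synthetic, and all array shapes, variable names and the classifier choice are assumptions, not the authors' method.

```python
# Hypothetical sketch: time-resolved linear decoding of one lens's color
# from source-localized MEG activity (synthetic data stand-in).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_times = 200, 50, 60                 # e.g. 0-300 ms in 5 ms steps
X = rng.standard_normal((n_trials, n_voxels, n_times))    # trials x voxels x time points
y = rng.integers(0, 2, n_trials)                          # left lens: 0 = clear, 1 = dark

# Decode the lens color independently at each time point; above-chance accuracy
# at a given time point indicates when that lens is linearly represented.
scores = np.empty(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    scores[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

print(scores.round(2))
```

Testing the nonlinear integration would additionally require decoding a conjunction such as the XOR of the two lens colors, which cannot be obtained as a linear readout of the two individual lens codes.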
Our analyses reveal the brain as a network of regions that initially (60-100 ms post-stimulus) represents the left and right lenses linearly in lateral-occipital regions (SupMat Fig 2). Their critical nonlinear integration occurs later (200-300 ms), primarily in the right parietal-temporal cortices, where the XOR, AND or OR functions are explicitly represented in the MEG activity (SupMat Fig 3).
To conclude, we can begin to frame the brain as a network that performs specific algorithmic functions, and to understand the where, when and how of specific information processing.