Laurent Caplette, Karim Jerbi, Frédéric Gosselin; Teasing apart the extraction and the processing of visual information in the brain. Journal of Vision 2017;17(10):972. doi: 10.1167/17.10.972.
Humans have a limited cognitive capacity; hence, when recognizing an object or a face, they must extract different features at different moments. As such, the neuronal response to a given feature comprises responses from moments when it was attended and responses from moments when it was unattended. Responses associated with different "extraction moments" could also differ because information extracted earlier might be accumulated longer. In the present study, observers were shown faces in which the eyes and mouth were sampled at random moments during a 200 ms period. They had to categorize the gender of the face while their EEG activity was recorded. To uncover the activity associated with the presentation of a given feature at a given moment, we performed multiple linear regressions, across trials, between the feature × presentation-moment sampling planes and the EEG activity of a given sensor at a given time point. When combining responses to a given feature across all presentation moments, we reproduced the classical N170 component on parieto-occipital sensors. When breaking this activity down into responses to different presentation moments, we uncovered markedly different activity (effect of presentation moment peaking at 84 and 332 ms after feature presentation, Fmax = 13.58, p < .001). This indicates that information extracted at different moments is not processed in the same way and that the N170 is the result of different computations. Interestingly, this effect is absent in lower occipital sensors, which process information in the same way regardless of presentation moment (p > .25; interaction between sensor and presentation moment around 328 ms, Fmax = 9.15, p < .05). Our novel method allows us to better understand the dynamics of visual information in the brain, from feature extraction to object recognition. Further analyses involving directional connectivity should allow us to detect the presence of gating and accumulators in the brain.
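The regression step described above can be sketched in code. The snippet below is a minimal illustration of the general idea, not the authors' analysis pipeline: all dimensions (number of trials, features, and presentation-moment bins) and the simulated data are hypothetical. Each trial's feature × presentation-moment sampling plane is flattened into a predictor vector, and the EEG amplitude at one sensor and time point is regressed onto those predictors across trials, yielding one coefficient per (feature, moment) cell.

```python
import numpy as np

# Hypothetical dimensions -- a sketch of the cross-trial regression,
# not the actual experimental parameters.
n_trials = 500    # number of trials
n_features = 2    # e.g., eyes and mouth
n_moments = 20    # presentation-moment bins within the 200 ms window
rng = np.random.default_rng(0)

# Binary sampling planes: which feature was shown at which moment, per trial.
planes = rng.integers(0, 2, size=(n_trials, n_features, n_moments))

# Simulated EEG amplitude at one sensor/time point per trial (placeholder data).
eeg = rng.standard_normal(n_trials)

# Flatten each trial's plane into a predictor row and prepend an intercept.
X = planes.reshape(n_trials, -1).astype(float)
X = np.column_stack([np.ones(n_trials), X])

# Ordinary least squares: one beta per (feature, moment) cell.
betas, *_ = np.linalg.lstsq(X, eeg, rcond=None)
coef_map = betas[1:].reshape(n_features, n_moments)
print(coef_map.shape)  # (2, 20)
```

In the full analysis this regression would be repeated for every sensor and every EEG time point, producing a map of responses to each feature at each presentation moment.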
Meeting abstract presented at VSS 2017