September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
Teasing apart the extraction and the processing of visual information in the brain
Author Affiliations
  • Laurent Caplette
    Département de psychologie, Université de Montréal
  • Karim Jerbi
    Département de psychologie, Université de Montréal
  • Frédéric Gosselin
    Département de psychologie, Université de Montréal
Journal of Vision August 2017, Vol.17, 972. doi:https://doi.org/10.1167/17.10.972
      Laurent Caplette, Karim Jerbi, Frédéric Gosselin; Teasing apart the extraction and the processing of visual information in the brain. Journal of Vision 2017;17(10):972. https://doi.org/10.1167/17.10.972.

Abstract

Humans have a limited cognitive capacity; hence, when recognizing an object or a face, they must extract different features at different moments. As such, the neuronal response to a given feature comprises responses from moments when it was attended and responses from moments when it was unattended. Responses associated with different "extraction moments" could also differ because information extracted earlier might be accumulated for longer. In the present study, observers were shown faces in which the eyes and the mouth were sampled at random moments during a 200 ms period. They had to categorize the gender of the face while their EEG activity was recorded. To uncover the activity associated with the presentation of a given feature at a given moment, we performed multiple linear regressions across trials between the feature × presentation-moment sampling planes and the EEG activity at a given sensor and time point. When combining responses to a given feature across all presentation moments, we reproduced the classical N170 component on parieto-occipital sensors. When breaking this activity down into responses to different presentation moments, we uncovered markedly different activity (effect of presentation moment peaking at 84 and 332 ms after feature presentation, Fmax = 13.58, p < .001). This indicates that information extracted at different moments is not processed in the same way, and that the N170 is the result of different computations. Interestingly, this effect is absent in lower occipital sensors, which process information in the same way regardless of presentation moment (p > .25; interaction between sensor and presentation moment around 328 ms, Fmax = 9.15, p < .05). Our novel method allows us to better understand the dynamics of visual information in the brain, from feature extraction to object recognition. Further analyses involving directional connectivity should allow us to detect the presence of gating and accumulators in the brain.
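To make the regression step more concrete, here is a minimal sketch (not the authors' pipeline) of how single-trial EEG amplitudes could be regressed on feature × presentation-moment sampling planes. The simulated data, array shapes, and variable names below are illustrative assumptions only.

```python
# Minimal sketch: regress single-trial EEG amplitude on feature x moment sampling planes.
# All data here are simulated; shapes and names are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_features, n_moments = 500, 2, 12     # e.g., eyes/mouth x 12 frames (~200 ms)
n_sensors, n_timepoints = 64, 300                # EEG sensors x post-stimulus samples

# Binary sampling planes: 1 if a feature was shown at a given moment on a trial
planes = rng.integers(0, 2, size=(n_trials, n_features, n_moments)).astype(float)

# Simulated single-trial EEG (trials x sensors x time)
eeg = rng.standard_normal((n_trials, n_sensors, n_timepoints))

# Design matrix: one column per (feature, moment) cell, plus an intercept
X = np.column_stack([np.ones(n_trials), planes.reshape(n_trials, -1)])

# Least-squares fit for every sensor/time point at once
Y = eeg.reshape(n_trials, n_sensors * n_timepoints)
betas, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Drop the intercept and recover (feature, moment, sensor, time) coefficient maps;
# summing a feature's coefficients over moments would give its overall response.
coef_maps = betas[1:].reshape(n_features, n_moments, n_sensors, n_timepoints)
print(coef_maps.shape)  # (2, 12, 64, 300)
```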

Meeting abstract presented at VSS 2017
