Journal of Vision, September 2021, Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Decoding real-world visual recognition abilities in the human brain
Author Affiliations & Notes
  • Simon Faghel-Soubeyrand
    Université de Montréal
    University of Birmingham
  • Meike Ramon
    Université de Fribourg
  • Eva Bamps
    KU Leuven
  • Matteo Zoia
    Université de Fribourg
  • Jessica Woodhams
    University of Birmingham
  • Arjen Alink
    University Medical Center Hamburg-Eppendorf
  • Frédéric Gosselin
    Université de Montréal
  • Ian Charest
    University of Birmingham
  • Footnotes
    Acknowledgements  This work was supported by NSERC and Mitacs scholarships to S. F-S., a Swiss National Science Foundation PRIMA (Promoting Women in Academia) grant (PR00P1_179872) to M.R., an ESRC IAAA grant to S. F-S., J. W., and I. C., an NSERC Discovery grant to F. G., and an ERC-StG to I. C.
Journal of Vision September 2021, Vol.21, 2604. doi:

      Simon Faghel-Soubeyrand, Meike Ramon, Eva Bamps, Matteo Zoia, Jessica Woodhams, Arjen Alink, Frédéric Gosselin, Ian Charest; Decoding real-world visual recognition abilities in the human brain. Journal of Vision 2021;21(9):2604.

      © ARVO (1962-2015); The Authors (2016-present)


The typical human visual system deciphers information about the visual world with impressive efficiency and speed. But not all individuals are equally competent at recognising what is presented to their eyes, and very little is known about the brain mechanisms behind these variations in recognition ability. Here, we ask whether interindividual variation in face cognition can be accurately “read” from brain activity, and use computational models to characterise the underlying brain mechanisms. We recorded high-density electroencephalography (EEG) in typical (n=17) and “super-recogniser” participants (n=16; individuals in the top 2% of the face-recognition ability spectrum) while they were presented with images of faces, objects, animals, and scenes. Relying on more than 100,000 trials, we trained linear classifiers to predict whether trial-by-trial brain activity belonged to an individual from the “super” or “typical” recogniser group. Significant decoding of group membership was observed from ~85 ms, peaking within the N170 window, and extending well past stimulus offset (>500 ms). Using fractional ridge regression, we extended these findings by predicting individual ability scores from EEG in similar time windows. Both results held whether we decoded from face or non-face stimuli. To better understand the brain mechanisms behind these variations, we used representational similarity analysis and computational models that characterise visual processing (convolutional neural networks trained on object recognition; CNNs) and semantic processing (a deep averaging network trained on sentence embeddings). This computational approach uncovered two representational signatures of higher face-recognition ability: mid-level visual computations (representations within the N170 window and mid-layers of CNNs) and high-level semantic computations (representations within the P600 window and the semantic model).
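As an illustrative sketch of the trial-wise group decoding described above (this is not the authors' pipeline; the data, channel count, and effect size below are simulated assumptions), a cross-validated linear classifier on per-trial EEG feature vectors could look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated stand-in for per-trial EEG features; the study used high-density
# EEG and more than 100,000 trials across 33 participants.
rng = np.random.default_rng(0)
n_trials, n_channels = 400, 128
X = rng.standard_normal((n_trials, n_channels))
y = rng.integers(0, 2, size=n_trials)  # 0 = "typical", 1 = "super-recogniser"
X[y == 1] += 0.3                       # inject a hypothetical group difference

# Cross-validated linear decoding of group membership; in the study this kind
# of classifier is evaluated across EEG time points to trace when group
# information emerges (~85 ms onward).
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```

With a genuine group difference in the features, mean cross-validated accuracy rises above the 50% chance level, which is the sense in which group membership is "decodable".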
Altogether, our results indicate that an individual’s ability to identify faces is supported by domain-general brain mechanisms distributed across several information processing steps, from low-level feature integration to high-level semantic processing, in the brain of the beholder.
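The representational similarity analysis step compares the dissimilarity structure of brain responses with that of a computational model (e.g. a CNN layer or a sentence-embedding model). A minimal sketch of that core operation, with randomly generated patterns standing in for both EEG and model features (all names and sizes here are illustrative assumptions):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_stimuli = 40

# Hypothetical stand-ins: one EEG response pattern per stimulus at a given
# time point, and model features constructed to be linearly related to them.
eeg_patterns = rng.standard_normal((n_stimuli, 128))
model_features = eeg_patterns @ rng.standard_normal((128, 50))

# Representational dissimilarity matrices (condensed vector form), compared
# with a rank correlation -- the basic RSA comparison.
rdm_eeg = pdist(eeg_patterns, metric="correlation")
rdm_model = pdist(model_features, metric="correlation")
rho, _ = spearmanr(rdm_eeg, rdm_model)
print(rho)
```

A high rank correlation between the two RDMs indicates that the model and the brain responses carry a similar representational geometry, which is how model layers were linked to the N170 and P600 windows in the abstract.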

