Vision Sciences Society Annual Meeting Abstract  |   August 2012
Investigating face identity matching and discrimination using event-related steady-state visual evoked potentials
Author Affiliations
  • Joan Liu-Shuang
    Institute of Psychology, Department of Cognitive Neurosciences, University of Louvain
  • Anthony M. Norcia
    Department of Psychology, Stanford University
  • Bruno Rossion
    Institute of Psychology, Department of Cognitive Neurosciences, University of Louvain
Journal of Vision August 2012, Vol. 12(9), 1172. doi: https://doi.org/10.1167/12.9.1172
Abstract

Humans are very efficient at discriminating and matching highly similar visual patterns such as faces. Yet, the perceptual mechanisms underlying this ability remain unclear. Following the recent application of the steady-state visual evoked potential (SSVEP) method to the study of individual face perception in a block design (Rossion & Boremanse, 2011, JOV), we extended this method to investigate the encoding of facial identity in an event-related stimulation mode. We recorded high-density EEG (128 channels) in 7 human observers presented with a 60-second sequence of face stimuli shown at a constant high rate (12.5 Hz, sinusoidal contrast modulation). At fixed intervals (every 5th stimulus, i.e. 12.5 Hz / 5 = 2.5 Hz), we introduced either a change (Experiment 1, discrimination) or a repetition (Experiment 2, matching) of facial identity. More precisely, in Experiment 1 (AAAABAAAAC…), different identities (B, C…) appeared at 2.5 Hz, and the amplitude at this frequency was taken as an index of identity discrimination. Conversely, in Experiment 2 (ABCDDEFGHH…), the 2.5 Hz response to the repetition of the immediately preceding identity was considered to reflect identity matching. Low-level visual differences were controlled by presenting faces upright or inverted and by randomly varying face size at each cycle of the main 12.5 Hz frequency. In both experiments, we found an increase in EEG amplitude and signal-to-noise ratio at 2.5 Hz and its harmonics (2F = 5 Hz, 3F = 7.5 Hz, 4F = 10 Hz), localised predominantly over right occipito-temporal electrodes. Importantly, responses in this region were much larger for upright than for inverted faces. These findings suggest not only that facial identity could be extracted despite the fast presentation rate, but also that its encoding was holistic rather than analytical in nature.
This demonstration prompts further investigation of face perception with this event-related approach in different human populations, including children and individuals with difficulties in face recognition.
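The stimulation and analysis parameters described above can be sketched in code. The snippet below is a minimal illustration, not the authors' stimulus or analysis software: the sequence construction follows the Experiment 1 design (a base identity at 12.5 Hz with a different identity every 5th stimulus), and the signal-to-noise ratio is computed as the amplitude at a frequency bin divided by the mean amplitude of neighbouring bins, a common frequency-tagging convention. All names (`snr`, `oddball_every`) and the simulated EEG signal are assumptions introduced for this sketch.

```python
import numpy as np

np.random.seed(0)  # deterministic toy simulation

# Experiment 1 paradigm (AAAAB...): base identity "A" shown at 12.5 Hz,
# with a different identity every 5th stimulus -> changes recur at 2.5 Hz.
base_rate = 12.5                              # stimulation frequency (Hz)
oddball_every = 5                             # every 5th face differs
oddball_rate = base_rate / oddball_every      # 2.5 Hz

n = int(base_rate * 60)                       # 750 stimuli in a 60-s sequence
seq = ["A"] * n
seq[oddball_every - 1::oddball_every] = ["B"] * (n // oddball_every)

# Toy EEG: a response locked to identity changes plus broadband noise.
fs, dur = 250, 60                             # assumed sampling rate (Hz), duration (s)
t = np.arange(fs * dur) / fs
eeg = np.sin(2 * np.pi * oddball_rate * t) + 0.5 * np.random.randn(t.size)

# Amplitude spectrum and neighbouring-bin SNR (a standard SSVEP metric).
amp = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def snr(target_hz, n_neigh=10):
    """Amplitude at the bin nearest target_hz over the mean of its neighbours."""
    i = int(np.argmin(np.abs(freqs - target_hz)))
    neigh = np.r_[amp[i - n_neigh:i], amp[i + 1:i + 1 + n_neigh]]
    return amp[i] / neigh.mean()

print(oddball_rate)                           # 2.5 (Hz)
print([oddball_rate * k for k in (2, 3, 4)])  # harmonics: 5.0, 7.5, 10.0 Hz
print(snr(2.5))                               # well above 1 at the identity-change frequency
```

With this design, any EEG response specific to the identity change (or repetition, in Experiment 2) is confined to 2.5 Hz and its harmonics, while responses common to all faces project onto 12.5 Hz, which is what makes the two processes separable in the spectrum.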

Meeting abstract presented at VSS 2012
