August 2012
Volume 12, Issue 9
Vision Sciences Society Annual Meeting Abstract | August 2012
Frequency-tagging EEG stimulation reveals integration of facial parts into a unified perceptual representation
Author Affiliations
  • Adriano Boremanse
    Institute of Psychology, Department of Cognitive Neurosciences, University of Louvain
  • Anthony Norcia
    Department of Psychology, Stanford University
  • Bruno Rossion
    Institute of Psychology, Department of Cognitive Neurosciences, University of Louvain
Journal of Vision August 2012, Vol.12, 1169. doi:10.1167/12.9.1169
Abstract

The human face is the perfect example of a Gestalt, a visual stimulus for which the whole is more than the sum of its parts. While there is ample evidence for interactive processing among facial parts (the processing of a given part of a face being influenced by the identity and position of the other parts), direct evidence for the integration of face parts into a unified representation is still lacking. Here we investigated this issue by applying the frequency-tagging stimulation technique (Regan & Heron, 1969) to facial parts. High-density (128-channel) scalp electroencephalography (EEG) was recorded in 15 participants presented with a composite face whose top and bottom parts were contrast-modulated at different frequency rates (5.87 and 7.14 Hz, counterbalanced). The same composite face was presented throughout 70-s sequences while participants had to detect colour changes on a fixation cross below the eyes. Responses at several harmonic frequencies were recorded over the visual cortex. Most importantly, intermodulation (IM) components (f1 + f2 = 13.01 Hz; f2 − f1 = 1.27 Hz), reflecting the interaction of the two input frequencies, were observed. While the responses at the fundamental frequencies remained unchanged or even increased following inversion and spatial misalignment of the face parts, the amplitude of the IM components decreased substantially in these conditions. A second study (15 participants) controlling for border effects between the aligned and misaligned conditions yielded the same results, that is, a strong and specific decrease of the IM components after misalignment and inversion of the face stimulus. Altogether, these observations provide the first objective trace of a unified face representation in the human brain and of its disruption by spatial misalignment of the facial halves and by inversion of the whole stimulus, two manipulations commonly used in the field of face processing to disrupt holistic face perception.
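The logic behind the intermodulation analysis can be illustrated in a few lines: a strictly linear system driven by two frequencies responds only at those frequencies and their harmonics, whereas any nonlinear combination of the two inputs (here a stand-in for neural integration of the two face halves) also creates energy at sum and difference frequencies such as f1 + f2 and f2 − f1. The following NumPy sketch is not the authors' analysis pipeline; the tagging frequencies and sequence duration come from the abstract, while the sampling rate and the multiplicative nonlinearity are illustrative assumptions.

```python
import numpy as np

fs = 512.0                        # sampling rate in Hz (assumed, not from the abstract)
t = np.arange(0, 70, 1 / fs)      # one 70-s stimulation sequence
f1, f2 = 5.87, 7.14               # tagging frequencies of the two face halves

s1 = np.sin(2 * np.pi * f1 * t)   # response driven by the top half
s2 = np.sin(2 * np.pi * f2 * t)   # response driven by the bottom half

# Linear superposition vs. a multiplicative nonlinearity standing in for
# integration of the two halves into a single representation.
linear = s1 + s2
integrated = s1 + s2 + 0.5 * s1 * s2

freqs = np.fft.rfftfreq(len(t), 1 / fs)

def amplitude_near(signal, f, tol=0.05):
    """Largest spectral amplitude within +/- tol Hz of frequency f."""
    spectrum = np.abs(np.fft.rfft(signal))
    band = (freqs > f - tol) & (freqs < f + tol)
    return spectrum[band].max()

# Both signals carry the fundamental (tagging) frequencies...
for f in (f1, f2):
    assert amplitude_near(linear, f) > 1000
    assert amplitude_near(integrated, f) > 1000

# ...but only the nonlinear combination produces intermodulation terms,
# at f1 + f2 = 13.01 Hz and f2 - f1 = 1.27 Hz.
assert amplitude_near(integrated, f1 + f2) > 50 * amplitude_near(linear, f1 + f2)
assert amplitude_near(integrated, f2 - f1) > 50 * amplitude_near(linear, f2 - f1)
```

This is why IM components are taken as a signature of integration: the product term s1 * s2 expands trigonometrically into cosines at exactly f1 + f2 and f2 − f1, frequencies at which a purely part-based (additive) response contains no energy.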

Meeting abstract presented at VSS 2012
