August 2014
Volume 14, Issue 10
Vision Sciences Society Annual Meeting Abstract  |   August 2014
Frequency coding of facial parts
Author Affiliations
  • Nicolas Dupuis-Roy
    Université de Montréal, Canada
  • Daniel Fiset
    Université du Québec en Outaouais, Canada
  • Kim Dufresne
    Université de Montréal, Canada
  • Frédéric Gosselin
    Université de Montréal, Canada
Journal of Vision August 2014, Vol.14, 129. doi:10.1167/14.10.129
Abstract

How the brain samples spatiotemporal signals to form an accurate representation of its environment has been a long-standing issue in cognitive neuroscience. One hypothesis that has gained interest over the years is that the brain samples visual information through periodic and transient processes (see Tallon-Baudry & Bertrand, 1999; VanRullen & Koch, 2003; VanRullen & Dubois, 2011). Although traces of oscillatory processes have been found repeatedly in psychophysical experiments since the middle of the last century, mapping their frequencies onto specific aspects of visual processing has remained elusive. Here, we attempted to fill this gap. One hundred and twelve participants completed 900 trials of a face-gender categorization task in which the achromatic and isoluminant chromatic content of faces was sampled in space and time with 3D Gaussian apertures, i.e., Bubbles (see Gosselin & Schyns, 2001). This reverse-correlation technique first allowed us to find that the achromatic information in the eyes, and the isoluminant chromatic information in the mouth and right-eye regions, were the most useful for this task. Next, time-frequency wavelet transforms were performed on the time series recorded in these anatomical facial regions to assess the frequency and latency at which they were sampled. The results showed that achromatic and isoluminant chromatic information within the same facial part were sampled at the same frequency (but at different latencies), whereas different facial parts were sampled at distinct frequencies (ranging from 6 to 10 Hz). This encoding pattern is consistent with recent electrophysiological evidence suggesting that facial features are ‘multiplexed’ by the frequency of transient synchronized oscillations in the brain (see Schyns, Thut & Gross, 2011; Smith, Gosselin & Schyns, 2005, 2006, 2007; Thut et al., 2011; Romei, Driver, Schyns & Thut, 2011).
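The two analysis steps described above — spatiotemporal sampling with 3D Gaussian apertures (Bubbles) and a time-frequency wavelet transform of the resulting sampling time series — can be sketched in minimal form as follows. This is an illustrative reconstruction, not the authors' code: the function names, parameter values, and the complex Morlet implementation are assumptions chosen for clarity.

```python
import numpy as np

def bubble_mask(shape, n_bubbles, sigma_space, sigma_time, rng):
    """Sum of randomly placed 3D Gaussian apertures over (y, x, t).

    Illustrative stand-in for the Bubbles sampling volume
    (Gosselin & Schyns, 2001); parameters are hypothetical.
    """
    H, W, T = shape
    yy, xx, tt = np.meshgrid(np.arange(H), np.arange(W), np.arange(T),
                             indexing="ij")
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx, ct = rng.integers(0, H), rng.integers(0, W), rng.integers(0, T)
        mask += np.exp(-(((yy - cy) ** 2 + (xx - cx) ** 2)
                         / (2 * sigma_space ** 2)
                         + (tt - ct) ** 2 / (2 * sigma_time ** 2)))
    # Clip so the mask acts as a transparency map in [0, 1].
    return np.clip(mask, 0.0, 1.0)

def morlet_power(signal, freqs, fs, n_cycles=5):
    """Time-frequency power by convolving with complex Morlet wavelets.

    Returns an array of shape (len(freqs), len(signal)).
    """
    power = np.zeros((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)          # temporal width
        t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t ** 2
                                                      / (2 * sigma_t ** 2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit energy
        power[i] = np.abs(np.convolve(signal, wavelet, mode="same")) ** 2
    return power

# Usage sketch: a sampling time series oscillating at 8 Hz (within the
# 6-10 Hz range reported in the abstract) peaks at 8 Hz in the transform.
rng = np.random.default_rng(0)
mask = bubble_mask((32, 32, 30), n_bubbles=20, sigma_space=4.0,
                   sigma_time=3.0, rng=rng)
fs = 60.0
t = np.arange(0, 2, 1 / fs)
series = np.sin(2 * np.pi * 8 * t)
freqs = np.arange(4, 15)
power = morlet_power(series, freqs, fs)
peak_freq = freqs[np.argmax(power.mean(axis=1))]
```

In the actual experiment the time series would be the aperture values recorded at a given facial region on each trial, reverse-correlated with the participant's responses; here a pure sinusoid stands in for that signal.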

Meeting abstract presented at VSS 2014
