Abstract
How the brain samples spatiotemporal signals in order to form an accurate representation of its environment has been a long-standing issue in cognitive neuroscience. One hypothesis that has gained interest over the years is that the brain samples visual information through periodic and transient processes (see Tallon-Baudry & Bertrand, 1999; VanRullen & Koch, 2003; VanRullen & Dubois, 2011). Although traces of oscillatory processes have been repeatedly found in psychophysical experiments since the middle of the last century, attempts to map their frequencies onto specific aspects of visual processing have remained inconclusive. Here, we attempted to fill this gap. One hundred and twelve participants completed 900 trials of a face gender categorization task in which the achromatic and isoluminant chromatic content of faces was sampled in space and time with 3D Gaussian apertures, i.e., Bubbles (see Gosselin & Schyns, 2001). This reverse correlation technique first allowed us to find that the achromatic information in the eyes, and the isoluminant chromatic information in the mouth and right eye regions, were the most useful for this task. Next, time-frequency wavelet transforms were performed on the time series recorded in these anatomical facial regions to assess the frequency and latency at which they were sampled. The results showed that achromatic and isoluminant chromatic information within the same facial part were sampled at the same frequency (but at different latencies), whereas different facial parts were sampled at distinct frequencies (ranging from 6 to 10 Hz). This encoding pattern is consistent with recent electrophysiological evidence suggesting that facial features are ‘multiplexed’ by the frequency of transient synchronized oscillations in the brain (see Schyns, Thut & Gross, 2011; Smith, Gosselin & Schyns, 2005, 2006, 2007; Thut et al., 2011; Romei, Driver, Schyns & Thut, 2011).
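The time-frequency analysis described above can be illustrated with a minimal sketch. The code below is not the authors' analysis pipeline; it is a hypothetical, NumPy-only example assuming a Morlet wavelet transform applied to a simulated "sampling" time series (here, a synthetic 8 Hz oscillation plus noise standing in for the reverse-correlated profile of one facial region), with the peak of mean wavelet power taken as the estimated sampling frequency. The sampling rate (`fs = 120.0`), number of wavelet cycles, and test frequency band are illustrative choices, not values from the study.

```python
import numpy as np

def morlet_power(signal, fs, freqs, n_cycles=5):
    """Mean power of a Morlet wavelet transform at each candidate frequency."""
    powers = []
    for f in freqs:
        sigma = n_cycles / (2 * np.pi * f)            # temporal width of the wavelet
        wt = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * wt) * np.exp(-wt**2 / (2 * sigma**2))
        conv = np.convolve(signal, wavelet, mode="same")
        powers.append(np.mean(np.abs(conv) ** 2))
    return np.array(powers)

# Hypothetical classification time series: an 8 Hz oscillation plus noise.
fs = 120.0                                  # assumed stimulus frame rate (Hz)
rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 1 / fs)
series = np.sin(2 * np.pi * 8 * t) + 0.3 * rng.standard_normal(t.size)

freqs = np.arange(4.0, 13.0, 1.0)           # band spanning the reported 6-10 Hz
power = morlet_power(series, fs, freqs)
peak = freqs[np.argmax(power)]              # estimated sampling frequency
```

Under these assumptions, the power spectrum peaks near the injected 8 Hz, which is how a region-specific sampling frequency in the 6-10 Hz range could be read off from such a transform.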
Meeting abstract presented at VSS 2014