Abstract
Faces convey a wealth of information, including the identity of the person seen, their emotional state, and social cues such as the direction of gaze. Crucially, these different aspects of face processing require distinct forms of viewpoint encoding. Whereas inferring another person’s attentional focus relies on a view-based code, identification requires the opposite: a fully viewpoint-invariant representation. Different cortical areas have been suggested to support each of these functions. However, little is known about the temporal aspects of viewpoint encoding in the human brain. Here, we combine electroencephalography (EEG) measurements with multivariate decoding techniques to resolve the dynamics of face processing with high temporal resolution. Data were recorded while subjects were presented with faces shown from 37 viewpoints. We then used the resulting patterns of visually evoked potentials to compute representational similarity matrices across time, and performed data- and model-driven analyses to reveal changes in the underlying cortical selectivity while controlling for effects of low-level stimulus properties and eye-movement artifacts. These analyses revealed a distinct sequence of viewpoint encoding. Head orientations were encoded first, potentially driven by low-level stimulus features. Shortly afterwards, at a latency of about 130 ms, these were followed by strong effects of viewpoint symmetry, i.e., joint selectivity for mirror-symmetric viewing angles, which has previously been suggested to support subsequent, viewpoint-invariant identity recognition. At a considerably later stage, about 280 ms after visual onset, EEG response patterns demonstrated a large degree of viewpoint invariance across almost all viewpoints tested, with the marked exception of front-on faces, the only viewing angle exhibiting direct eye contact. Taken together, our results indicate that the encoding of facial viewpoints follows a temporal sequence of coding schemes, including invariance to symmetric viewpoints as a separate stage, supporting distinct task requirements at different stages of face processing.
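For readers unfamiliar with the method, the following is a minimal sketch of the time-resolved representational similarity analysis (RSA) described above. It assumes EEG epochs of shape (n_trials, n_channels, n_times) with one of 37 viewpoint labels per trial; the variable names, the 1 minus Pearson-r distance, and the mirror-symmetry model construction are illustrative assumptions, not the authors' actual analysis code.

```python
import numpy as np
from scipy.stats import spearmanr


def timepoint_rdm(patterns):
    """Dissimilarity (1 - Pearson r) between condition patterns at one timepoint.

    patterns: (n_conditions, n_channels) mean evoked response per viewpoint.
    """
    return 1.0 - np.corrcoef(patterns)


def time_resolved_rdms(epochs, labels):
    """Stack of representational dissimilarity matrices, one per time sample.

    epochs: (n_trials, n_channels, n_times) EEG data
    labels: (n_trials,) integer viewpoint condition per trial
    returns: (n_times, n_conditions, n_conditions)
    """
    conditions = np.unique(labels)
    # Average trials within each viewpoint -> (n_conditions, n_channels, n_times)
    means = np.stack([epochs[labels == c].mean(axis=0) for c in conditions])
    return np.stack([timepoint_rdm(means[:, :, t])
                     for t in range(epochs.shape[2])])


def model_timecourse(rdms, model_rdm):
    """Spearman correlation between each timepoint's RDM and a model RDM,
    computed over the lower-triangular (off-diagonal) entries only."""
    idx = np.tril_indices(model_rdm.shape[0], k=-1)
    return np.array([spearmanr(rdm[idx], model_rdm[idx])[0] for rdm in rdms])


# Hypothetical mirror-symmetry model: viewpoints differing only in sign
# (e.g. -40 deg vs +40 deg) are predicted to evoke similar response patterns.
# The -90..+90 deg sampling of the 37 viewpoints is an assumption.
angles = np.linspace(-90, 90, 37)
symmetry_model = 1.0 - (np.abs(angles)[:, None]
                        == np.abs(angles)[None, :]).astype(float)
```

Under these assumptions, a rise in the symmetry-model timecourse around 130 ms after stimulus onset would correspond to the viewpoint-symmetry stage reported above, and near-uniform off-diagonal dissimilarities around 280 ms would correspond to the later viewpoint-invariant stage.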
Meeting abstract presented at VSS 2015