September 2015
Volume 15, Issue 12
Vision Sciences Society Annual Meeting Abstract  |   September 2015
Representational dynamics of facial viewpoint encoding: Head orientation, viewpoint symmetry, and front-on views
Author Affiliations
  • Tim Kietzmann
    Institute of Cognitive Science, University of Osnabrück, 49076 Osnabrück, Germany
  • Anna Gert
    Institute of Cognitive Science, University of Osnabrück, 49076 Osnabrück, Germany
  • Peter König
    Institute of Cognitive Science, University of Osnabrück, 49076 Osnabrück, Germany; Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
Journal of Vision September 2015, Vol.15, 750. doi:10.1167/15.12.750
Tim Kietzmann, Anna Gert, Peter König; Representational dynamics of facial viewpoint encoding: Head orientation, viewpoint symmetry, and front-on views. Journal of Vision 2015;15(12):750. doi: 10.1167/15.12.750.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Faces provide a large variety of information, including the identity of the seen person, their emotional state, and social cues such as the direction of gaze. Crucially, these different aspects of face processing require distinct forms of viewpoint encoding. Whereas inferring another person’s attentional focus relies on a view-based code, identification requires the opposite: a fully viewpoint-invariant representation. Distinct cortical areas have been suggested to support each of these functions. However, little is known about the temporal aspects of viewpoint encoding in the human brain. Here, we combine electroencephalography (EEG) measurements with multivariate decoding techniques to resolve the dynamics of face processing with high temporal resolution. Data were recorded while subjects were presented with faces shown from 37 viewpoints. We then used the resulting patterns of visually evoked potentials to compute representational similarity matrices across time, and performed data- and model-driven analyses to reveal changes in the underlying cortical selectivity while controlling for effects of low-level stimulus properties and eye-movement artifacts. These analyses revealed a distinct sequence of viewpoint encoding. Head orientations were encoded first, potentially driven by low-level stimulus features. Shortly afterwards, at a latency of about 130 ms, these were followed by strong effects of viewpoint symmetry, i.e., joint selectivity for mirror-symmetric viewing angles, which has previously been suggested to support subsequent, viewpoint-invariant identity recognition. At a considerably later stage, about 280 ms after stimulus onset, EEG response patterns demonstrated a large degree of viewpoint invariance across almost all viewpoints tested, with the marked exception of front-on faces, the only viewing angle exhibiting direct eye contact.
Taken together, our results indicate that the encoding of facial viewpoints follows a temporal sequence of coding schemes, including invariance across mirror-symmetric viewpoints as a separate stage, supporting distinct task requirements at different stages of face processing.
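The time-resolved representational similarity analysis described above can be illustrated with a minimal sketch: at each time point, the pattern of evoked potentials across channels for each viewpoint condition is compared pairwise to build a representational dissimilarity matrix (RDM), and a model RDM (here, viewpoint symmetry) is correlated against it over time. All array shapes, the synthetic data, and the model construction below are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_viewpoints, n_channels, n_times = 37, 64, 200  # assumed dimensions; 37 viewpoints as in the study

# Simulated condition-averaged evoked potentials: (viewpoints, channels, time).
erps = rng.standard_normal((n_viewpoints, n_channels, n_times))

def rdm_at(t, data):
    """Correlation-distance RDM (1 - Pearson r) across viewpoint patterns at time t."""
    patterns = data[:, :, t]              # (viewpoints, channels)
    return 1.0 - np.corrcoef(patterns)    # (viewpoints, viewpoints)

# One RDM per time point gives a (time, viewpoints, viewpoints) trajectory.
rdms = np.stack([rdm_at(t, erps) for t in range(n_times)])

# Hypothetical model RDM for viewpoint symmetry: mirror-symmetric viewing
# angles are predicted to be similar (distance 0), all other pairs dissimilar.
# Assumes viewpoints are ordered from leftmost to rightmost angle.
sym_model = np.ones((n_viewpoints, n_viewpoints))
np.fill_diagonal(sym_model, 0.0)
for i in range(n_viewpoints):
    sym_model[i, n_viewpoints - 1 - i] = 0.0

# Model fit over time: correlate the upper triangles of model and data RDMs
# (Pearson here for simplicity; rank correlation is a common alternative).
iu = np.triu_indices(n_viewpoints, k=1)
fit = np.array([np.corrcoef(sym_model[iu], rdms[t][iu])[0, 1]
                for t in range(n_times)])
```

With real data, the time course of `fit` would be expected to rise around the latencies where the corresponding coding scheme dominates (e.g., ~130 ms for viewpoint symmetry); with the random data used here it simply hovers near zero.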

Meeting abstract presented at VSS 2015
