August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2023
Decoding the neural representations of emotional faces in stereo- versus monoscopic viewing conditions
Author Affiliations
  • Felix Klotzsche
    Max Planck Institute for Human Cognitive and Brain Sciences
    Humboldt-Universität zu Berlin, Berlin School of Mind and Brain, Germany
    Humboldt-Universität zu Berlin, Department of Psychology, Germany
  • Ammara Nasim
    Max Planck Institute for Human Cognitive and Brain Sciences
    Carl von Ossietzky Universität Oldenburg, Germany
  • Simon M. Hofmann
    Max Planck Institute for Human Cognitive and Brain Sciences
    Fraunhofer Heinrich Hertz Institute, Department of Artificial Intelligence, Berlin, Germany
  • Arno Villringer
    Max Planck Institute for Human Cognitive and Brain Sciences
    Humboldt-Universität zu Berlin, Berlin School of Mind and Brain, Germany
  • Vadim V. Nikulin
    Max Planck Institute for Human Cognitive and Brain Sciences
  • Werner Sommer
    Humboldt-Universität zu Berlin, Berlin School of Mind and Brain, Germany
    Humboldt-Universität zu Berlin, Department of Psychology, Germany
    Zhejiang Normal University, Department of Psychology, Jinhua, China
  • Michael Gaebler
    Max Planck Institute for Human Cognitive and Brain Sciences
    Humboldt-Universität zu Berlin, Berlin School of Mind and Brain, Germany
Journal of Vision August 2023, Vol. 23, 5618. doi: https://doi.org/10.1167/jov.23.9.5618
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Humans represent faces as three-dimensional shapes and facial expressions as changes in the spatial configuration of relevant landmarks (e.g., the eyebrows) within them. Stereopsis is a key mechanism by which the visual system perceives spatial depth. The availability of stereoscopic depth cues might therefore influence how human observers process faces. Here, we compared the effects of stereoscopic and monoscopic presentations of human facial expressions on the neurophysiological response by combining immersive virtual reality (VR) technology with EEG and multivariate decoding. Thirty-four healthy young participants performed an emotion recognition task (720 trials) in which renderings of three computer-generated faces showed different emotional expressions (neutral, happy, angry, surprised). All stimuli were presented as frontal portraits on an HTC Vive Pro Eye VR headset. In monoscopic trials, the faces were shown as 2D planes (rendered by a single virtual camera); in stereoscopic trials, the 3D face model was displayed (rendered by one camera per eye), providing stereoscopic depth information. We trained and cross-validated time-resolved linear classifiers to predict the displayed emotional expression from the time-locked EEG signal. Participants recognized all emotions with high accuracy in both viewing conditions. The emotional expression could be decoded above chance level starting around 150 ms after stimulus onset. Binary classification of emotion pairs (e.g., angry vs. neutral or happy vs. surprised) yielded distinctive decoding time courses. Decoding accuracy did not differ significantly between mono- and stereoscopic trials. The viewing condition itself, however, could also be decoded from the EEG, with a time course similar to that of decoding the (task-irrelevant) identity of the displayed face.
We conclude that although stereoscopically presented faces elicit an EEG response distinguishable from that to monoscopically presented faces, this difference does not affect the decodability of neural processes related to emotion recognition. This is evidence that the way the brain decodes emotional expressions from facial geometries—viewed from a fixed perspective—is largely uninfluenced by stereoscopic information.
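The time-resolved decoding approach described in the abstract—training and cross-validating a classifier separately at each post-stimulus time point—can be illustrated with a minimal numpy sketch on simulated data. All details here (trial and channel counts, the nearest-class-mean classifier, leave-one-out cross-validation, the time index at which the class signal appears) are illustrative assumptions for demonstration, not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated two-class "EEG": 40 trials x 8 channels x 20 time points
# (illustrative dimensions, not the study's data).
n_trials, n_ch, n_t = 40, 8, 20
y = np.repeat([0, 1], n_trials // 2)
X = rng.normal(size=(n_trials, n_ch, n_t))
# A class-specific signal emerges from time index 8 onward,
# mimicking decodability appearing ~150 ms after stimulus onset.
X[y == 1, :, 8:] += 1.0

def timepoint_accuracy(X, y, t):
    """Leave-one-out CV accuracy of a nearest-class-mean classifier
    (a simple linear decision rule) at a single time point t."""
    n = len(y)
    correct = 0
    for i in range(n):
        train = np.arange(n) != i
        m0 = X[train & (y == 0), :, t].mean(axis=0)
        m1 = X[train & (y == 1), :, t].mean(axis=0)
        x = X[i, :, t]
        pred = int(np.linalg.norm(x - m1) < np.linalg.norm(x - m0))
        correct += pred == y[i]
    return correct / n

# Decoding time course: one cross-validated accuracy per time point.
acc = np.array([timepoint_accuracy(X, y, t) for t in range(n_t)])
```

In this sketch, `acc` stays near chance (0.5) for early time points and rises once the simulated signal appears, which is the shape of result the abstract reports for emotion decoding.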
