September 2018
Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Faces and voices in the brain: RSA reveals modality-general person-identity representations in the STS
Author Affiliations
  • Maria Tsantani
    Division of Psychology, Department of Life Sciences, Brunel University London
  • Nikolaus Kriegeskorte
    Zuckerman Mind Brain Behavior Institute, Columbia University
  • Carolyn McGettigan
    Department of Psychology, Royal Holloway, University of London
  • Lúcia Garrido
    Division of Psychology, Department of Life Sciences, Brunel University London
Journal of Vision September 2018, Vol.18, 1139. doi:10.1167/18.10.1139
Citation: Maria Tsantani, Nikolaus Kriegeskorte, Carolyn McGettigan, Lúcia Garrido; Faces and voices in the brain: RSA reveals modality-general person-identity representations in the STS. Journal of Vision 2018;18(10):1139. doi: 10.1167/18.10.1139.

Abstract

Faces and voices can both trigger the recognition of familiar people. A large body of research has separately explored the recognition of familiar faces and voices, but here we investigated whether there are modality-general representations of person identity that can be driven equally by faces or voices. Based on previous research, we predicted that such modality-general representations could exist in multimodal brain regions (e.g., Shah et al., 2001) or in unimodal brain regions via direct coupling of face and voice regions (e.g., von Kriegstein et al., 2005). In an event-related fMRI experiment with 30 participants, we measured brain activity patterns while participants viewed the faces and listened to the voices of 12 famous people. We defined multimodal, face-selective, and voice-selective brain regions with independent localisers. We used representational similarity analysis (RSA) with the linear discriminant contrast (LDC), a cross-validated distance measure (Walther et al., 2016), to create representational distance matrices (RDMs) of all 12 people for each brain region. We created face RDMs, voice RDMs, and crossmodal RDMs. Each cell in an RDM shows the neural discriminability of a pair of identities. For the crossmodal RDMs, the LDC distances show whether the discriminant based on the activity patterns of an identity pair in one modality can be used to differentiate the activity patterns of the same identity pair in the other modality. Under the null hypothesis, the LDC distance is distributed around zero, so we tested which regions showed distances significantly greater than zero. The mean LDC distance for crossmodal RDMs was significantly greater than zero in regions of the mid and posterior superior temporal sulcus (STS) that showed multimodal responses. These results suggest that multimodal regions of the STS represent person identity in a similar way regardless of the input modality.
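The core computation named in the abstract, a cross-validated linear discriminant contrast (LDC) between two conditions, can be sketched in a few lines of Python. This is a minimal illustration, not the authors' analysis code: the function names, the two-partition interface, and the use of an identity-like inverse noise covariance are assumptions for the sketch. The idea is that a Fisher discriminant is fit on one independent data partition and its projection is evaluated on the other, so the distance is distributed around zero under the null hypothesis.

```python
import numpy as np

def ldc_distance(a1, b1, a2, b2, cov_inv):
    """Cross-validated linear discriminant contrast between conditions a and b.

    a1, b1: activity patterns (1-D voxel vectors) from partition 1
    a2, b2: patterns for the same conditions from an independent partition 2
    cov_inv: inverse noise covariance across voxels

    Positive in expectation only if the two conditions evoke reliably
    distinct patterns; fluctuates around zero otherwise.
    """
    w = cov_inv @ (a1 - b1)        # discriminant weights fit on partition 1
    return float(w @ (a2 - b2))    # contrast projected onto partition 2

def ldc_rdm(patterns1, patterns2, cov_inv):
    """Pairwise LDC distances for n conditions -> symmetric n x n RDM.

    patterns1, patterns2: (n_conditions, n_voxels) arrays from two
    independent partitions (e.g., odd vs. even runs).
    """
    n = patterns1.shape[0]
    rdm = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = ldc_distance(patterns1[i], patterns1[j],
                             patterns2[i], patterns2[j], cov_inv)
            rdm[i, j] = rdm[j, i] = d
    return rdm
```

For the crossmodal RDMs described above, the same sketch applies with the two partitions drawn from different modalities: pass face-evoked patterns as `patterns1` and voice-evoked patterns for the same identities as `patterns2`, so the discriminant learned from faces is tested on voices.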

Meeting abstract presented at VSS 2018
