Vision Sciences Society Annual Meeting Abstract  |   September 2016
Journal of Vision, Volume 16, Issue 12
Open Access
NEURAL BASIS AND DYNAMICS OF FACE AND VOICE INTEGRATION OF EMOTION EXPRESSION
Author Affiliations
  • Jodie Davies-Thompson
    Crossmodal Perception and Plasticity Laboratory, Center of Mind/Brain Sciences, University of Trento, Italy
  • Giulia V. Elli
    Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, USA
  • Mohamed Rezk
    Crossmodal Perception and Plasticity Laboratory, Center of Mind/Brain Sciences, University of Trento, Italy
  • Stefania Benetti
    Crossmodal Perception and Plasticity Laboratory, Center of Mind/Brain Sciences, University of Trento, Italy
  • Markus van Ackeren
    Crossmodal Perception and Plasticity Laboratory, Center of Mind/Brain Sciences, University of Trento, Italy
  • Olivier Collignon
    Crossmodal Perception and Plasticity Laboratory, Center of Mind/Brain Sciences, University of Trento, Italy
Journal of Vision September 2016, Vol.16, 1230. doi:https://doi.org/10.1167/16.12.1230
Abstract

Background and objective: The brain has separate specialized units for processing faces and voices, located in occipital and temporal cortices, respectively. Yet humans seamlessly integrate signals from the face and the voice of others for optimal social interaction. How is redundant information delivered by faces and voices, such as emotion expressions, integrated in the brain? We characterized the neural basis of face-voice integration, with a specific emphasis on how face- or voice-selective regions interact with multisensory regions, and on how emotional expression affects integration properties.

Method: We presented 24 subjects with 500 ms stimuli containing visual-only, auditory-only, or combined audiovisual information, which varied in expression (neutral, fearful). We searched for regions responding more to bimodal than to unimodal stimuli, and examined the response in face- and voice-selective regions of interest defined by independent localizer scans. Finally, the regions of interest were entered into dynamic causal modeling (DCM) to determine, using Bayesian model selection, the direction of information flow between these regions.

Results: Using a whole-brain approach, we found that only the right posterior STS responded more to bimodal stimuli than to the face or voice alone, and only when the stimuli contained an emotional (fearful) expression. No region responded more to bimodal than to unimodal neutral stimuli. A region-of-interest analysis including face- and voice-selective regions extracted from the independent functional localizers similarly revealed multisensory integration only in the face-selective right posterior STS. DCM analysis indicated that the right STS receives unidirectional information from the face-selective fusiform face area (FFA) and the voice-selective middle temporal gyrus (MTG), with emotional expression affecting their connection strengths.

Conclusion: Our study supports a hierarchical model of face and voice integration with a convergence zone in the right STS, and shows that such integration depends on the (emotional) salience of the stimuli.
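
The abstract does not report the statistical criterion used to define a multisensory response in a region of interest. A common choice, assumed here purely for illustration, is the "max criterion": the audiovisual response must exceed the stronger of the two unimodal responses. The Python sketch below runs such a test on hypothetical per-subject ROI estimates; all values are simulated and no variable name comes from the study.

```python
# Illustrative sketch only: the abstract does not state the exact criterion
# for multisensory integration. Assumed here: the "max criterion",
# AV > max(A, V), tested across subjects in a single ROI.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 24  # as reported in the abstract

# Hypothetical per-subject response estimates (e.g., GLM betas) extracted from
# a face-selective right pSTS ROI, in arbitrary units.
beta_audio = rng.normal(0.8, 0.4, n_subjects)
beta_visual = rng.normal(1.0, 0.4, n_subjects)
beta_av = rng.normal(1.5, 0.4, n_subjects)

# One-tailed paired t-test of AV against the stronger unimodal response.
max_unimodal = np.maximum(beta_audio, beta_visual)
t_stat, p_two_tailed = stats.ttest_rel(beta_av, max_unimodal)
p_one_tailed = p_two_tailed / 2 if t_stat > 0 else 1 - p_two_tailed / 2

print(f"AV vs. max(A, V): t({n_subjects - 1}) = {t_stat:.2f}, p = {p_one_tailed:.4f}")
```

The connectivity result rests on dynamic causal modeling, which is normally specified and inverted in SPM (MATLAB) together with a hemodynamic forward model and Bayesian model selection; none of that machinery is reproduced here. The sketch below implements only the bilinear neural state equation that DCM builds on, dz/dt = (A + u_mod B) z + C u, wired according to the winning architecture described in the Results (unidirectional FFA to pSTS and MTG to pSTS connections, modulated by emotional expression). All parameter values are invented.

```python
# Minimal sketch of the bilinear neural model underlying DCM, used only to
# illustrate the architecture favored by the abstract's model comparison.
import numpy as np

regions = ["FFA", "MTG", "pSTS"]

# A: fixed connectivity (rows = targets, columns = sources). Only FFA -> pSTS
# and MTG -> pSTS are non-zero (unidirectional), plus self-decay on the diagonal.
A = np.array([
    [-1.0,  0.0,  0.0],   # FFA
    [ 0.0, -1.0,  0.0],   # MTG
    [ 0.4,  0.4, -1.0],   # pSTS receives from FFA and MTG
])

# B: modulation by the "fearful expression" input, strengthening the
# feedforward connections into pSTS (hypothetical values).
B_emotion = np.array([
    [0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0],
    [0.3, 0.3, 0.0],
])

# C: driving inputs. Faces drive FFA, voices drive MTG.
C = np.array([
    [1.0, 0.0],   # face input -> FFA
    [0.0, 1.0],   # voice input -> MTG
    [0.0, 0.0],
])

def simulate(face, voice, fearful, dt=0.01, steps=300):
    """Euler integration of dz/dt = (A + fearful * B_emotion) z + C u."""
    z = np.zeros(len(regions))
    u = np.array([face, voice], dtype=float)
    J = A + fearful * B_emotion
    trace = []
    for _ in range(steps):
        z = z + dt * (J @ z + C @ u)
        trace.append(z.copy())
    return np.array(trace)

# With these made-up parameters, pSTS activity under audiovisual stimulation
# is larger when the modulatory (fearful) input is on.
for fearful in (0, 1):
    zt = simulate(face=1, voice=1, fearful=fearful)
    print(f"fearful={fearful}: peak pSTS activity = {zt[:, 2].max():.3f}")
```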

Meeting abstract presented at VSS 2016
