September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | August 2017
Sensory Reliability Does Not Alter the Weight of Visual Information in Multisensory Emotion Adaptation
Author Affiliations
  • Ka Lon Sou
    Division of Psychology, School of Humanities and Social Sciences, Nanyang Technological University, Singapore
  • Fun Lau
    Neurolinguistics & Cognitive Neuroscience Lab, Division of Linguistics and Multilingual Studies, Nanyang Technological University, Singapore
  • Hong Xu
    Division of Psychology, School of Humanities and Social Sciences, Nanyang Technological University, Singapore
Journal of Vision August 2017, Vol. 17, 820. doi: https://doi.org/10.1167/17.10.820
Abstract

Multisensory information is thought to be integrated according to a reliability-based model. Recently, however, it has been suggested that perception and adaptation may rely on different underlying processes, so multisensory adaptation may follow a different integration model. The current study therefore investigates which integration model best explains audiovisual integration in multisensory emotion adaptation. Eighteen participants were tested in a two-alternative forced-choice adaptation paradigm. On each trial, the adaptor was shown for 3 seconds, followed by an emotion judgment task on a face morphed between happy and angry expressions. Noise in the visual adaptors was manipulated so that the facial emotion recognition rate under noise was 60%-80%, as determined in a pre-experiment block. There were six conditions in the main experiment: three unisensory conditions (Auditory, Clear Visual, or Noisy Visual adaptor), two multisensory conditions (Auditory paired with a Clear Visual or Noisy Visual adaptor), and a baseline (no adaptor). All adaptors depicted anger. Except for the auditory adaptor, all four remaining adaptors (including the Noisy Visual adaptor) generated significant facial emotion aftereffects. Comparing the Bayesian Information Criteria of the three multisensory models obtained from within-subject multiple linear regressions (BIC visual-dominant = -130.299; BIC fixed-ratio = -136.125; BIC reliability-based = -130.740), our results suggest that multisensory emotion adaptation is best explained by the fixed-ratio model, with visual input contributing 62.8% regardless of the presence of visual noise, Est./S.E. = 5.081, p < .001. Our findings indicate that in emotion adaptation the multisensory percept is a weighted sum of the visual and auditory inputs, but the weights are not determined by the reliability of the information source. This contrasts with multisensory emotion perception. We discuss the possibility that, beyond the perception of emotion, neural areas responsible for higher-level executive functions are also adapted during multisensory adaptation.
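To make the model comparison concrete, below is a minimal Python sketch of the three candidate integration models and a Gaussian BIC score. All data values, reliability numbers, and parameter counts are hypothetical placeholders for illustration; only the general form of the models (visual-dominant, fitted fixed-ratio weighted sum, reliability-weighted sum) follows the abstract. This is not the study's analysis code or data.

```python
import numpy as np

# Illustrative sketch of the three integration models compared in the
# abstract. All numbers below are hypothetical placeholders, not study data.
rng = np.random.default_rng(0)
n = 18                                    # one observation per participant
visual = rng.normal(1.0, 0.2, n)          # unisensory visual aftereffect
auditory = rng.normal(0.3, 0.2, n)        # unisensory auditory aftereffect
observed = 0.63 * visual + 0.37 * auditory + rng.normal(0.0, 0.05, n)

rel_v, rel_a = 0.7, 0.9                   # assumed sensory reliabilities

# Model predictions for the multisensory aftereffect, w*V + (1 - w)*A:
pred_dominant = visual                                   # w fixed at 1
x = visual - auditory                                    # fit w by least squares
w_hat = np.dot(observed - auditory, x) / np.dot(x, x)
pred_fixed = w_hat * visual + (1.0 - w_hat) * auditory
w_rel = rel_v / (rel_v + rel_a)                          # w set by reliability
pred_rel = w_rel * visual + (1.0 - w_rel) * auditory

def bic(pred, obs, k):
    """BIC for Gaussian residuals with k free parameters (incl. variance)."""
    sigma2 = np.mean((obs - pred) ** 2)
    log_lik = -0.5 * len(obs) * (np.log(2.0 * np.pi * sigma2) + 1.0)
    return k * np.log(len(obs)) - 2.0 * log_lik

# Lower BIC wins: it rewards fit but penalizes extra free parameters.
print(f"visual-dominant         BIC = {bic(pred_dominant, observed, k=1):.3f}")
print(f"fixed-ratio (w={w_hat:.3f}) BIC = {bic(pred_fixed, observed, k=2):.3f}")
print(f"reliability-based       BIC = {bic(pred_rel, observed, k=1):.3f}")
```

Under this scoring, the fixed-ratio model earns the lowest BIC when the true weight is stable across noise levels, mirroring the abstract's result; the study itself used within-subject multiple linear regressions rather than this simplified least-squares fit.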

Meeting abstract presented at VSS 2017
