Vision Sciences Society Annual Meeting Abstract  |  September 2018
Volume 18, Issue 10  |  Open Access
Does a salient auditory stimulus always impair visual memory?
Author Affiliations
  • Keiji Konishi
    The University of Tokyo
  • Ryoichi Nakashima
    The University of Tokyo
  • Kazuhiko Yokosawa
    The University of Tokyo
Journal of Vision September 2018, Vol. 18, 1143. https://doi.org/10.1167/18.10.1143
Abstract

Previous research shows that a salient item in a scene impairs memory for peripheral information (Christianson, 1992). This saliency effect has been investigated mainly within the visual modality, and it remains unclear whether visual memory can be disturbed by stimulation in another modality. This study examined the impact of a salient auditory stimulus on visual memory. Participants viewed a stream of 156 faces and pressed a key whenever they detected a face of a specified gender. Importantly, half of the faces were immediately followed by a loud pure tone, whereas the other faces were presented alone (Experiment 1) or followed by a soft tone (Experiment 2). To verify the saliency manipulation, we analyzed reaction times (RTs) to the faces, drawing on findings on auditory alerting (Stahl & Rammsayer, 2005). Because RTs to faces accompanied by loud tones were shorter than RTs to faces accompanied by soft tones or presented without tones, we treated the loud tones as salient in both experiments. After the face detection task, participants completed a surprise two-alternative forced-choice (2AFC) recognition memory test. Interestingly, the loud tones affected recognition performance differently in the two experiments: in Experiment 1, accuracy for faces paired with loud tones was worse than for faces presented without tones, whereas in Experiment 2, accuracy for faces paired with loud tones was better than for faces paired with soft tones. This difference implies that participants processed the stimuli (i.e., faces and tones) according to a prior assumption about how they were related. In Experiment 1, faces and tones were likely perceived as separate events because tones did not accompany every face; in Experiment 2, each face may have been bound with the co-occurring tone into a unitary multisensory event, because a tone was present on every trial. In conclusion, the effect of a salient tone can spread to the face representation when the tone and the face form an "audio-visual unity."

Meeting abstract presented at VSS 2018
