September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2019
Statistical learning of cross-modal correspondence with non-linear mappings
Author Affiliations & Notes
  • Kazuhiko Yokosawa
    The University of Tokyo
  • Asumi Hayashi
    The University of Tokyo
  • Ryotaro Ishihara
    The University of Tokyo
Journal of Vision September 2019, Vol.19, 274a. doi:https://doi.org/10.1167/19.10.274a
      Kazuhiko Yokosawa, Asumi Hayashi, Ryotaro Ishihara; Statistical learning of cross-modal correspondence with non-linear mappings. Journal of Vision 2019;19(10):274a. doi: https://doi.org/10.1167/19.10.274a.



      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Cross-modal correspondences help us determine which signals belong together when we receive information from multiple sensory systems. Many studies have shown that people exhibit cross-modal correspondences involving features from various sensory modalities. Moreover, some correspondences seem to reflect natural statistical mappings, implying that humans may acquire knowledge of cross-modal correspondences through daily perceptual experience. We have previously demonstrated that participants can infer relationships between visual space and auditory pitch merely from perceptual experience, and can predict visual stimuli from auditory stimuli (Hayashi & Yokosawa, 2018). However, those results could also be explained by participants simply applying a linear relationship between the visual and auditory stimuli, rather than learning a continuous mapping. The present study therefore aimed to clarify whether people can acquire more complicated relationships between visual and auditory features. In each trial of this experiment, 5 or 15 stimulus pairs, presented consecutively, each comprised a pure tone and a small black disc on a screen. Disc position and tone pitch were related by one of 8 kinds of curvilinear correlations in each trial; in some trials the pairs were unrelated. After exposure to the 5 or 15 pairs of stimuli, participants heard a pure tone of a particular (varied) frequency and then guessed the spatial location of the disc previously associated with that frequency. Results show that participants extracted most of the complicated curvilinear relationships between the visual and auditory features, rather than relying on a linear relationship alone. Furthermore, participants predicted disc positions more precisely after being presented with 15 pairs of visual-auditory stimuli than with 5 pairs. In sum, participants learn perceptual relationships, even curvilinear ones. They can reconstruct non-linear continuous mappings between visual and auditory features from limited experience.
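The stimulus design described above can be sketched in code: pairs of a pure-tone frequency and a disc position are generated under a curvilinear mapping. This is a minimal illustrative sketch only; the abstract does not specify the 8 curvilinear functions, frequency range, or screen coordinates used, so the example mappings and parameters below are hypothetical assumptions.

```python
import math
import random

# Hypothetical examples of curvilinear mappings from normalized pitch to
# normalized horizontal position; the actual 8 functions are not specified
# in the abstract.
CURVILINEAR_MAPPINGS = [
    lambda u: u ** 2,
    lambda u: math.sqrt(u),
    lambda u: math.sin(math.pi * u / 2),
    lambda u: 1 - (1 - u) ** 2,
]

def generate_pairs(n_pairs, mapping, f_low=200.0, f_high=2000.0, rng=None):
    """Generate (tone frequency in Hz, disc x-position in [0, 1]) pairs.

    Each pair samples a normalized pitch u uniformly, converts it to a
    pure-tone frequency, and places the disc at mapping(u).
    Frequency bounds are assumed values, not from the abstract.
    """
    rng = rng or random.Random()
    pairs = []
    for _ in range(n_pairs):
        u = rng.random()                      # normalized pitch in [0, 1]
        freq = f_low + u * (f_high - f_low)   # pure-tone frequency
        x = mapping(u)                        # disc position via curvilinear map
        pairs.append((freq, x))
    return pairs
```

Under this sketch, an exposure phase with 15 pairs corresponds to `generate_pairs(15, mapping)` for one mapping drawn per trial; the 5-pair condition simply uses fewer samples of the same underlying function.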
