September 2018
Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2018
Acquirement of cross-modal correspondence from mere experience
Author Affiliations
  • Asumi Hayashi
    The University of Tokyo
  • Kazuhiko Yokosawa
    The University of Tokyo
Journal of Vision September 2018, Vol.18, 1133. doi:10.1167/18.10.1133

      Asumi Hayashi, Kazuhiko Yokosawa; Acquirement of cross-modal correspondence from mere experience. Journal of Vision 2018;18(10):1133. doi: 10.1167/18.10.1133.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Cross-modal perception implies that we know which signals belong together and how their features relate. Previous studies have demonstrated that a correspondence between different sensory features can be acquired through perceptual learning (Ernst, 2007; Seitz et al., 2007). However, it remains uncertain whether we can extract the structure of a cross-modal correspondence when we merely learn several one-to-one correspondences. This study investigated whether subjects can extract correspondences between visual space and auditory pitch from a small number of stimuli. In the experiment, 5, 10, or 15 pairs of a pure tone and a small black circle on a display were presented consecutively. Four rules - 'the higher the pitch, the higher the position,' 'the higher the pitch, the further right the position,' and the reverse of each, each with a 25% probability - governed presentation of the stimuli. The frequency of the pure tone (200-900 Hz, in 1 Hz steps) and the position of the circle (a 700 px range, in 1 px steps) were variable, but they were either related to each other according to one of these four rules or not related at all. After these cue stimuli, only a pure tone (200/375/550/725/900 Hz) was presented and the participants guessed the position of the circle. We predicted that if the subjects succeeded in rule extraction, they could deduce the circle height or left-right position corresponding to the auditory pitch according to the rule. Results indicate that the deduced circle height or left-right position mapped linearly onto pitch in accordance with each rule. This means participants could extract the rule and predict the circle position quite precisely when 10 or 15 cue stimuli were presented; even with five cue stimuli, considerable precision was observed. These results suggest that we can acquire a cross-modal correspondence between visual space and auditory pitch from mere experience and make corresponding predictions.
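The stimulus scheme described above can be sketched in code. The following is a minimal illustration, not the authors' actual experiment software: the rule names, the uniform frequency sampling, and the assumption of a strictly linear frequency-to-position mapping are all assumptions made for the sketch; the abstract specifies only the frequency range (200-900 Hz), the position range (700 px), the four rules, and the test tones.

```python
import random

# Illustrative sketch of the abstract's stimulus scheme (names are hypothetical).
# One of four rules linearly maps pure-tone frequency (200-900 Hz) onto circle
# position within a 700 px range; the "reverse" rules flip the mapping.

FREQ_MIN, FREQ_MAX = 200, 900   # Hz
POS_RANGE = 700                 # px

# rule name -> whether the mapping is reversed (True = higher pitch, lower/left)
RULES = {
    "higher_pitch_higher": False,
    "higher_pitch_lower": True,
    "higher_pitch_right": False,
    "higher_pitch_left": True,
}

def position_for(freq, reversed_rule):
    """Linearly map a frequency onto a position in [0, POS_RANGE] px."""
    p = (freq - FREQ_MIN) / (FREQ_MAX - FREQ_MIN) * POS_RANGE
    return POS_RANGE - p if reversed_rule else p

def make_cue_stimuli(n_pairs, rule, rng=random):
    """Generate n_pairs of (frequency, position) cue stimuli under one rule."""
    reversed_rule = RULES[rule]
    freqs = [rng.randint(FREQ_MIN, FREQ_MAX) for _ in range(n_pairs)]
    return [(f, position_for(f, reversed_rule)) for f in freqs]

# Prediction phase: the five test tones named in the abstract.
test_tones = [200, 375, 550, 725, 900]
predicted = [position_for(f, RULES["higher_pitch_higher"]) for f in test_tones]
```

Under this sketch, a participant who has extracted the 'higher pitch, higher position' rule would place the 200 Hz test tone at 0 px and the 900 Hz tone at 700 px, with the intermediate tones spaced linearly between them.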

Meeting abstract presented at VSS 2018
