Asumi Hayashi, Kazuhiko Yokosawa; Acquirement of cross-modal correspondence from mere experience. Journal of Vision 2018;18(10):1133. doi: https://doi.org/10.1167/18.10.1133.
Cross-modal perception implies that we know which signals belong together and how their features relate. Previous studies have demonstrated that a correspondence between different sensory features can be acquired through perceptual learning (Ernst, 2007; Seitz et al., 2007). However, it remains uncertain whether we can extract the structure of a cross-modal correspondence when we merely learn several one-to-one correspondences. This study investigated whether subjects can extract the occurrence of correspondences between visual space and auditory pitch from a few stimuli. In the experiment, 5, 10, or 15 pairs of a pure tone and a small black circle on a display were presented consecutively. Four rules governed presentation of the stimuli, each with a 25% probability: 'the higher the pitch, the higher the position,' 'the higher the pitch, the further right the position,' and the reverse of each. The frequency of the pure tone (200-900 Hz, in 1-Hz steps) and the position of the circle (700 px, in 1-px steps) were variable, but they were either related to each other according to one of these four rules or they were not related. After these clue stimuli, a pure tone alone (200/375/550/725/900 Hz) was presented, and the participants guessed the position of the circle. We predicted that if subjects succeeded in rule extraction, they could deduce the circle height or left-right position corresponding to the auditory pitch according to the rule. Results indicate that the deduced circle height or left-right position mapped linearly onto pitch height in accordance with each rule. This means participants could extract the rule and predict the circle position quite precisely when 10 or 15 clue stimuli were presented; even with five clue stimuli, considerable precision was observed. These results suggest that we can acquire the occurrence of a cross-modal correspondence between visual space and auditory pitch from mere experience and make corresponding predictions.
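The stimulus design described above can be illustrated with a minimal sketch. This is not the authors' code; the rule representation, function names, and the uniform sampling of frequencies are assumptions. It only shows how the four linear pitch-position mappings (200-900 Hz in 1-Hz steps, a 700-px position range in 1-px steps, one of four rules chosen with 25% probability) could generate clue stimuli:

```python
import random

FREQ_MIN, FREQ_MAX = 200, 900   # pure-tone frequency range (Hz), 1-Hz steps
POS_RANGE = 700                 # circle position range (px), 1-px steps

# The four rules from the abstract; axis and sign are the only free
# parameters of each linear pitch-position mapping (hypothetical encoding).
RULES = [
    ("vertical", +1),    # higher pitch -> higher position
    ("horizontal", +1),  # higher pitch -> further right
    ("vertical", -1),    # higher pitch -> lower position
    ("horizontal", -1),  # higher pitch -> further left
]

def make_clue_stimuli(n_pairs, rule):
    """Generate n_pairs of (frequency, axis, position) clue stimuli under one rule."""
    axis, sign = rule
    stimuli = []
    for _ in range(n_pairs):
        freq = random.randint(FREQ_MIN, FREQ_MAX)         # sample a tone in 1-Hz steps
        frac = (freq - FREQ_MIN) / (FREQ_MAX - FREQ_MIN)  # normalize pitch to 0..1
        if sign < 0:
            frac = 1.0 - frac                             # reversed rules flip the mapping
        pos = round(frac * (POS_RANGE - 1))               # map linearly onto the px range
        stimuli.append((freq, axis, pos))
    return stimuli

rule = random.choice(RULES)              # each rule with 25% probability
clues = make_clue_stimuli(10, rule)      # e.g., the 10-clue condition
```

Under this sketch, a participant who extracts the rule from the clue pairs could invert the same linear mapping to predict the circle position for a probe tone (200/375/550/725/900 Hz).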
Meeting abstract presented at VSS 2018