Erika Kumakura, Kazuhiko Yokosawa; Acquiring multiple cross-modal correspondences. Journal of Vision 2015;15(12):852. doi: https://doi.org/10.1167/15.12.852.
Cross-modal correspondences are tendencies to match features across sensory modalities, stemming from the statistical co-occurrence of those features in daily experience (Spence, 2011). Previous studies have demonstrated that a novel one-to-one correspondence between different sensory features can be acquired through perceptual learning (Ernst, 2007; Michel & Jacobs, 2007). However, it is not certain whether a generalized representation of cross-modal correspondence can be formed merely from the co-occurrence of multiple features. Also unknown is whether perceptual learning of correspondences can occur in noisy environments. This study investigated whether subjects can form a generalized representation from correspondences among specific multiple features (Exp. 1), and whether other randomly selected co-occurring features (i.e., accessory features) affect perceptual learning (Exp. 2). Subjects judged whether the contrast of a presented Gabor patch was “High” or “Low” while listening to a pure tone. During this task, a specific pair of visual orientation and auditory loudness co-occurred with either high or low contrast (i.e., exemplars). In Exp. 2, pitch features were added as accessory features, co-occurring at random with the other features. We analyzed subjects' judgments of a medium contrast that co-occurred with all pairs of orientation and loudness. We predicted that if subjects can learn the statistical relationships among multiple features and generalize them, their judgments of the medium contrast should be modulated according to the co-occurring pair of orientation and loudness. In Exp. 1, subjects' judgments of the medium contrast revealed a partial modulation toward “High” when the co-occurring pair of orientation and loudness had appeared with high contrast, but not vice versa. Exp. 2, involving the pitch manipulation, resulted in more extensive learning than Exp. 1.
These results suggest that multiple-feature correspondences can be generalized through perceptual learning, within limits; they also suggest that accessory features may facilitate this learning.
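The co-occurrence design described above can be sketched as a minimal trial generator. This is illustrative only: the feature labels, trial counts, and contingency structure here are assumptions chosen to mirror the abstract's description, not the authors' actual stimulus parameters.

```python
import random

# Illustrative trial generator for the exemplar/probe design described in the
# abstract. All feature values and trial counts below are hypothetical.

ORIENTATIONS = ["left-tilted", "right-tilted"]
LOUDNESS = ["loud", "soft"]

# Exemplar contingencies: one specific orientation+loudness pair always
# accompanies high contrast, another always accompanies low contrast.
EXEMPLARS = {
    ("left-tilted", "loud"): "high",
    ("right-tilted", "soft"): "low",
}

def make_trials(n_exemplar=40, n_probe=20, seed=0):
    rng = random.Random(seed)
    trials = []
    # Exemplar trials: contrast is fully predicted by the paired features.
    for (ori, loud), contrast in EXEMPLARS.items():
        trials += [{"orientation": ori, "loudness": loud,
                    "contrast": contrast}] * n_exemplar
    # Probe trials: medium contrast co-occurs with every feature pair,
    # so High/Low judgments on these trials reveal any learned
    # generalization of the correspondence.
    for ori in ORIENTATIONS:
        for loud in LOUDNESS:
            trials += [{"orientation": ori, "loudness": loud,
                        "contrast": "medium"}] * n_probe
    rng.shuffle(trials)
    return trials

trials = make_trials()
```

If learning generalizes as predicted, medium-contrast probes paired with the high-contrast exemplar features should draw more “High” responses than those paired with the low-contrast exemplar features.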
Meeting abstract presented at VSS 2015