September 2024
Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Comparing auditory and visual category learning
Author Affiliations
  • Casey L. Roark
    University of New Hampshire
Journal of Vision September 2024, Vol. 24, 1001. https://doi.org/10.1167/jov.24.10.1001
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Introduction: Categorization is a fundamental skill that spans the senses. Categories enable quick identification of visual objects in our surroundings and phonemes and words in spoken speech. While categories are ubiquitous across modalities, the amodal and modality-specific mechanisms of perceptual category learning are not well understood. I investigated learning of artificial auditory and visual categories that shared a higher-level unidimensional rule structure. If learners build amodal category representations, they should benefit from simultaneous learning of categories from different modalities that share a higher-level structure. If learners build representations separately across modalities, their learning should either be unaffected or impaired by simultaneously learning categories from different modalities.

Methods: Learners were randomly assigned to learn two auditory and two visual categories either simultaneously (interleaved) or separately (blocked). The higher-level category structure was the same across modalities – learning required selective attention to one dimension (temporal modulation, spatial frequency) while ignoring a category-irrelevant dimension (spectral modulation, orientation). After 400 training trials (interleaved: auditory and visual together; blocked: auditory then visual or vice versa), participants completed two separate generalization test blocks for both modalities (counterbalanced order).

Results: When learning categories separately, accuracies were no different across modalities, indicating that the categories were well-matched for difficulty. When learning categories simultaneously, learners were significantly more accurate for visual than auditory categories. Importantly, there were no significant differences in test performance across blocked and interleaved training conditions in either modality.
Conclusion: These results indicate that learners build separate, modality-specific representations even when learning auditory and visual categories simultaneously. Further, learners do not exploit the shared amodal structure of categories across modalities to facilitate learning. These results have important implications for understanding the learning of real-world categories, which are often multimodal, and highlight the importance of considering the role of modality in models of category learning.
