Vision Sciences Society Annual Meeting Abstract  |   September 2019
Unitization of audio-visual conjunctions is reflected by shifts in processing architecture
Author Affiliations & Notes
  • Jackson C Liang
    Department of Psychology, University of Toronto
  • Layan A Elfaki
    Department of Psychology, University of Toronto
  • Morgan D Barense
    Department of Psychology, University of Toronto
Journal of Vision September 2019, Vol. 19, 188a. https://doi.org/10.1167/19.10.188a
Abstract

Unitization is thought to integrate multiple features into a single unit across repeated learning instances; thus, the combination of features unique to your pet dog becomes elevated above a sea of overlapping canine features at the park. There is ample evidence consistent with unitization across many domains, yet how unitization is implemented within the human brain remains largely unknown. Here we test two central hypotheses. First, unitizing audio-visual object conjunctions should promote parallel processing during perceptual judgments involving the component features. Second, processing architecture should vary according to the representational-hierarchical view, which proposes that the neural representations of feature conjunctions are organized hierarchically relative to the individual features themselves. Thus, unitization benefits highly familiar conjunctions, while other conjunctions benefit little even when they comprise the same underlying features. Participants learned to identify conjunctions of birdcalls (A through D) and bird images (1 through 4) as belonging to a Lake or a River (e.g., A1 and B2 were Lake birds, while C3 and D4 were River birds). We constructed an Intact set of birds that matched directly trained conjunctions (e.g., A1 and B2) and a Recombined set of birds that were never directly trained (e.g., A2 and B1). We tested participants’ ability to identify Lake features while manipulating the salience of the audio and visual features via rainfall-like audio and visual noise. The resulting reaction time distributions were analyzed using the Systems Factorial Technology framework to determine whether the audio-visual processing architecture was parallel, serial, or coactive. Consistent with unitization theory, we observed survivor interaction contrasts (SICs) indicating parallel processing for Intact birds. Furthermore, we observed SICs consistent with serial processing for Recombined birds, despite their sharing features with the Intact birds. These data show how unitization sharpens perceptual processing for familiar conjunctions and remains robust to confusion from overlapping features.
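For readers unfamiliar with the SFT measure cited above, the survivor interaction contrast can be computed directly from the four reaction-time distributions of the double-factorial design, in which low and high salience are crossed over the auditory and visual channels (Townsend & Nozawa, 1995). The Python sketch below is illustrative only; the simulated data and all names are hypothetical and do not represent the authors' analysis code:

```python
import numpy as np

def survivor(rts, t_grid):
    """Empirical survivor function S(t) = P(RT > t), evaluated on t_grid."""
    rts = np.asarray(rts)
    return np.array([np.mean(rts > t) for t in t_grid])

def sic(rt_ll, rt_lh, rt_hl, rt_hh, t_grid):
    """Survivor interaction contrast:
    SIC(t) = S_LL(t) - S_LH(t) - S_HL(t) + S_HH(t),
    where the first subscript indexes auditory salience and the second
    visual salience (L = low/degraded, H = high/clear).
    """
    return (survivor(rt_ll, t_grid) - survivor(rt_lh, t_grid)
            - survivor(rt_hl, t_grid) + survivor(rt_hh, t_grid))

# Hypothetical demo with simulated reaction times (in seconds):
# lower salience -> slower responses -> larger gamma scale parameter.
rng = np.random.default_rng(0)
t_grid = np.linspace(0.2, 2.0, 200)
rt = {cond: rng.gamma(shape=5.0, scale=s, size=500)
      for cond, s in [("LL", 0.16), ("LH", 0.12),
                      ("HL", 0.12), ("HH", 0.10)]}
curve = sic(rt["LL"], rt["LH"], rt["HL"], rt["HH"], t_grid)
```

Under SFT, the shape of this curve diagnoses the architecture: an SIC that is positive everywhere indicates parallel first-terminating processing, one that is negative everywhere indicates parallel exhaustive processing, a flat SIC at zero indicates serial first-terminating processing, and a negative-then-positive S-shape indicates serial exhaustive processing; coactivation produces a small early negative dip followed by a large positive region.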

Acknowledgement: NSERC (to M.D.B.) and a James S. McDonnell Foundation grant (to M.D.B.)