August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2023
Top-down effects on Cross-Modal Stimulus Processing: A Predictive Coding Framework
Author Affiliations & Notes
  • Soukhin Das
    Center for Mind and Brain, University of California Davis
    Department of Psychology, University of California Davis
  • Sreenivasan Meyyappan
    Center for Mind and Brain, University of California Davis
    Department of Psychology, University of California Davis
  • Mingzhou Ding
    Pruitt Family Department of Biomedical Engineering, University of Florida
  • George R. Mangun
    Center for Mind and Brain, University of California Davis
    Department of Psychology, University of California Davis
  • Footnotes
    Acknowledgements  This work was supported by NIMH grant MH117991.
Journal of Vision August 2023, Vol. 23, 5801. doi: https://doi.org/10.1167/jov.23.9.5801

      Soukhin Das, Sreenivasan Meyyappan, Mingzhou Ding, George R. Mangun; Top-down effects on Cross-Modal Stimulus Processing: A Predictive Coding Framework. Journal of Vision 2023;23(9):5801. https://doi.org/10.1167/jov.23.9.5801.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Studies have shown that attention can operate across different sensory modalities, such as vision and audition, and plays a crucial role in our ability to integrate and process multisensory information. However, the neural mechanisms underlying cross-modal attention remain largely unknown. We used event-related potentials (ERPs) to investigate the neural basis of cross-modal attention in a 2×2 design in which auditory cues (HEAR or SEE) or visual cues (H or S) indicated the modality (visual or auditory) of the to-be-attended target. After a random delay, on 80% of trials, auditory tones or visual gratings were presented as target stimuli in the cued modality. On the remaining 20% of trials, targets were presented in the uncued modality (invalid trials). Participants (n=30) were instructed to discriminate the spatial frequency (wide versus narrow) of the visual gratings or the pitch (high versus low) of the auditory stimuli on all trials, irrespective of cue validity. The ERPs to targets (cued versus uncued) showed effects of attention in both modalities. In the auditory modality, significant differences between validly and invalidly cued trials were observed in the N100 and P300 components at central channels (Cz, CPz) and in late positive potentials (LPP) over posterior channels (CPz, Pz). For visual targets, cueing effects were prominent in the N1-P2 and P300 over posterior and occipital channels, along with posterior LPPs. Furthermore, the amplitudes of these ERP components (auditory N100 and P300; visual N1-P2 and P300) were larger for invalidly cued targets than for validly cued targets. These differences may reflect a re-orientation of top-down cross-modal signals toward the incongruent target and an updating of internal goals and predictions based on the prior cues.
Our findings of top-down modulation of early sensory processing can be interpreted within a predictive coding framework, in terms of the mismatch between the predicted (cued) and actual (uncued) stimuli.
