Journal of Vision, Volume 16, Issue 12 (Open Access)
Vision Sciences Society Annual Meeting Abstract  |   September 2016
Neural representations of visual stimuli are influenced by cognitive load
Author Affiliations
  • Luca Vizioli
    Center for Magnetic Resonance Research (CMRR), University of Minnesota
  • Kendrick Kay
    Center for Magnetic Resonance Research (CMRR), University of Minnesota
  • Junpeng Lao
    Department of Psychology, University of Fribourg
  • Meike Ramon
    Department of Psychology, University of Fribourg
Journal of Vision September 2016, Vol.16, 1237. doi:10.1167/16.12.1237

Understanding how humans form a coherent percept of the visual world represents a major endeavor for cognitive and vision scientists alike. Variations in task demands elicit different neural representations of identical visual input. However, it remains unclear how and where in the brain external and internal inputs interact. To address this fundamental question, we recorded the BOLD signal (whole-brain scan; TR = 2 s; voxel size = 2-mm isotropic) of 10 participants during different tasks with identical visual stimuli. In a fast event-related experiment, participants viewed images of personally familiar faces or edible objects while performing either an identity task or a category-membership (male/female, fruit/vegetable) decision task. Importantly, we manipulated the spatial frequency (SF) content by parametrically varying the amount of low-pass filtering to simulate the information available at different viewing distances, thereby controlling cognitive load. We compared the performance of three models in predicting the beta weights elicited by each stimulus under different task constraints across the whole brain. Model 1 was a monotonic function of SF level (i.e., a bottom-up model); model 2 was a nonlinear combination of the same monotonic SF-level function and participants' reaction times (RTs; a proxy for cognitive load); and model 3 was a nonlinear combination of the monotonic SF-level function, participants' RTs, and accuracy scores (a proxy for the information required to fulfill the task). Behaviorally, our data show that RTs and SF information requirements varied across tasks. Crucially, these differences drove neural responses accordingly: cognitive load and task-dependent information requirements increased model accuracy in parietal and ventral cortices (specifically in the FFA for face tasks). These results confirm that neural activation elicited by identical retinal inputs is not invariant, but is shaped by top-down task constraints. Specifically, we posit that cognitive load plays a crucial role in shaping neural representations.
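The nested model comparison described above can be sketched in code. The snippet below is an illustrative sketch only, not the authors' analysis: it simulates a single voxel's beta weights and predictors (the variable names `sf_level`, `rt`, and `acc` and the multiplicative interaction terms are assumptions), then compares the cross-validated predictive accuracy of three nested regression models analogous to models 1-3.

```python
# Illustrative sketch (not the authors' code): comparing three nested models
# of voxel beta weights. All data are simulated; the predictor names and the
# multiplicative form of the "nonlinear combination" are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 120

# Simulated predictors: SF level (amount of low-pass filtering), RT, accuracy.
sf_level = rng.integers(1, 6, size=n_trials).astype(float)        # 5 SF levels
rt = 0.6 + 0.1 * sf_level + rng.normal(0, 0.05, n_trials)         # RT grows with filtering
acc = np.clip(1.0 - 0.05 * sf_level + rng.normal(0, 0.02, n_trials), 0, 1)

# Simulated voxel betas that depend on both SF and cognitive load (RT).
betas = 0.5 * sf_level + 1.5 * rt + rng.normal(0, 0.1, n_trials)

def cv_r2(X, y, k=5):
    """k-fold cross-validated R^2 for an ordinary least-squares fit."""
    idx = np.arange(len(y))
    ss_res, ss_tot = 0.0, 0.0
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        pred = X[fold] @ w
        ss_res += np.sum((y[fold] - pred) ** 2)
        ss_tot += np.sum((y[fold] - y[train].mean()) ** 2)
    return 1.0 - ss_res / ss_tot

ones = np.ones(n_trials)
model1 = np.column_stack([ones, sf_level])                          # bottom-up: SF only
model2 = np.column_stack([ones, sf_level, rt, sf_level * rt])       # + cognitive load (RT)
model3 = np.column_stack([ones, sf_level, rt, acc,
                          sf_level * rt, sf_level * acc])           # + accuracy

for name, X in [("model 1", model1), ("model 2", model2), ("model 3", model3)]:
    print(name, "cross-validated R^2:", round(cv_r2(X, betas), 3))
```

In this simulation the betas carry RT-driven variance beyond what SF level explains, so the load-aware models outperform the bottom-up model, mirroring the abstract's finding that cognitive load improves model accuracy in load-sensitive regions.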

Meeting abstract presented at VSS 2016

