Luca Vizioli, Kendrick Kay, Junpeng Lao, Meike Ramon; Neural representations of visual stimuli are influenced by cognitive load. Journal of Vision 2016;16(12):1237. doi: 10.1167/16.12.1237.
© 2017 Association for Research in Vision and Ophthalmology.
Understanding how humans form a coherent percept of the visual world represents a major endeavor for cognitive and vision scientists alike. Variations in task demands elicit different neural representations of identical visual input. However, it remains unclear how and where in the brain external and internal inputs interact. To address this fundamental question, we recorded the BOLD signal (whole-brain scan; TR = 2 s; voxel size = 2-mm isotropic) of 10 participants during different tasks with identical visual stimuli. In a fast, event-related experiment, participants viewed images of personally familiar faces or edible objects while performing either an identity task or a category-membership (fe/male, fruit/vegetable) decision task. Importantly, we manipulated the spatial frequency (SF) content by parametrically varying the amount of low-pass filtering to simulate the information available at different viewing distances, thereby controlling cognitive load. We compared the performance of three models in predicting the beta weights elicited by each stimulus under different task constraints across the whole brain. Model 1 was a monotonic function of SF level (i.e., a bottom-up model); model 2 was a nonlinear combination of the same monotonic SF-level function and participants' RTs (a proxy for cognitive load); and model 3 was a nonlinear combination of the monotonic SF-level function, participants' RTs, and accuracy scores (a proxy for the information required to fulfill the task). Our data show that, behaviorally, RTs and SF information requirements varied across tasks. Crucially, these differences drove neural responses accordingly: cognitive load and task-dependent information requirements increased model accuracy in parietal and ventral cortices (specifically, in the FFA for face tasks). These results confirm that the neural activation elicited by identical retinal inputs is not invariant, but is shaped by top-down task constraints. Specifically, we posit that cognitive load plays a crucial role in shaping neural representations.
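The model comparison described above can be illustrated with a minimal sketch. This is not the authors' analysis pipeline: the data are simulated, the nonlinear combination is approximated here by simple interaction terms, and all variable names and values are hypothetical. The sketch only shows the general logic of comparing nested predictors of per-stimulus beta weights (SF level alone, SF + RT, SF + RT + accuracy) via cross-validated R².

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated trial data (the real study used 10 participants
# and parametric low-pass filtering of the stimuli).
n = 200
sf = rng.integers(1, 6, n).astype(float)              # SF filtering level, 1..5
rt = 0.4 + 0.1 * sf + rng.normal(0, 0.05, n)          # RT rises with filtering (load proxy)
acc = np.clip(1.0 - 0.05 * sf + rng.normal(0, 0.02, n), 0, 1)

# Simulated voxel betas driven by both SF level and an SF-by-load interaction.
beta = 0.5 * sf + 0.8 * sf * rt + rng.normal(0, 0.1, n)

def cv_r2(X, y, k=5):
    """k-fold cross-validated R^2 for an ordinary least-squares fit."""
    idx = np.arange(len(y))
    ss_res, ss_tot = 0.0, 0.0
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        pred = X[fold] @ w
        ss_res += np.sum((y[fold] - pred) ** 2)
        ss_tot += np.sum((y[fold] - y[train].mean()) ** 2)
    return 1 - ss_res / ss_tot

ones = np.ones(n)
X1 = np.column_stack([ones, sf])                              # model 1: SF only
X2 = np.column_stack([ones, sf, rt, sf * rt])                 # model 2: + RT
X3 = np.column_stack([ones, sf, rt, sf * rt, acc, sf * acc])  # model 3: + accuracy

for name, X in [("SF only", X1), ("SF + RT", X2), ("SF + RT + acc", X3)]:
    print(f"{name}: CV R^2 = {cv_r2(X, beta):.3f}")
```

In this toy setup, models that include the load proxy (RT) fit the simulated betas better than the bottom-up SF-only model, mirroring the qualitative pattern the abstract reports for parietal and ventral cortices.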
Meeting abstract presented at VSS 2016