December 2022, Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract
Role of the Pulvinar in Visual Affective Scene Processing
Author Affiliations
  • Lihan Cui
    J Crayton Pruitt Family Department of Biomedical Engineering, University of Florida
  • Yun Liang
    J Crayton Pruitt Family Department of Biomedical Engineering, University of Florida
  • Ke Bo
    Department of Psychological and Brain Sciences, Dartmouth College
  • Andreas Keil
    Department of Psychology and NIMH Center for Emotion and Attention, University of Florida
  • Mingzhou Ding
    J Crayton Pruitt Family Department of Biomedical Engineering, University of Florida
Journal of Vision, December 2022, Vol. 22, 3415. https://doi.org/10.1167/jov.22.14.3415
Lihan Cui, Yun Liang, Ke Bo, Andreas Keil, Mingzhou Ding; Role of the Pulvinar in Visual Affective Scene Processing. Journal of Vision 2022;22(14):3415. https://doi.org/10.1167/jov.22.14.3415.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

The pulvinar is the largest nucleus of the thalamus, and its proposed functions range from working memory to attention to emotion. In this study, we examined the role of the pulvinar in visual affective scene processing by analyzing two independent fMRI datasets. In the first, simultaneous EEG-fMRI data were recorded from 20 human participants viewing pleasant, unpleasant, and neutral scenes from the International Affective Picture System (IAPS). In the second, fMRI data were recorded from 30 human participants performing an emotion reappraisal task, in which they were cued to anticipate, and then viewed, unpleasant and neutral scenes from the IAPS. Analyzing single-trial BOLD responses with machine learning and AI-inspired decoding techniques, we found that linear machine learning methods such as the support vector machine (SVM) could not consistently detect differences in the neural representations of the affective scene categories in the pulvinar, whereas a deep neural network (DNN) based model did so consistently, with significantly above-chance decoding accuracies. In particular, for the unpleasant-vs-neutral decoding analysis, the DNN model achieved accuracies of 58.6% on the first dataset and 64.5% on the second. A weight map analysis of both datasets further revealed that the medial pulvinar, especially the right medial pulvinar, was the most important contributor to the DNN decoding performance. These results demonstrate that the pulvinar contributes to the processing of visual affective scenes and that AI-inspired techniques can detect functional relationships not readily detected by conventional machine learning methods.
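The linear decoding baseline described in the abstract (an SVM applied to single-trial BOLD response patterns) can be sketched as follows. This is an illustrative sketch only: the data are synthetic, and the ROI voxel count, trial count, labels, and cross-validation scheme are assumptions, not details taken from the study.

```python
# Sketch of a linear SVM decoding analysis on single-trial BOLD patterns.
# All data are synthetic; voxel/trial counts are hypothetical, not from the study.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_trials, n_voxels = 120, 300                    # hypothetical pulvinar ROI size
X = rng.standard_normal((n_trials, n_voxels))    # single-trial BOLD patterns
y = rng.integers(0, 2, n_trials)                 # 0 = neutral, 1 = unpleasant (synthetic)

# Linear SVM with per-voxel z-scoring, evaluated by stratified cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean decoding accuracy: {scores.mean():.3f}")
```

With truly random labels, as here, cross-validated accuracy hovers near chance (0.5); the abstract's point is that for real pulvinar data this linear baseline stayed near chance while a DNN-based decoder exceeded it.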
