Abstract
The pulvinar is the largest nucleus of the thalamus, with proposed functions ranging from working memory to attention to emotion. In this study, we examined the role of the pulvinar in visual affective scene processing by analyzing two independent fMRI datasets. In the first dataset, simultaneous EEG-fMRI data were recorded from 20 human participants viewing pleasant, unpleasant, and neutral scenes from the International Affective Picture System (IAPS). In the second dataset, fMRI data were recorded from 30 human participants performing an emotion-reappraisal task, in which they were cued to anticipate, and then viewed, unpleasant and neutral scenes from the IAPS. Analyzing single-trial BOLD responses with machine learning and AI-inspired decoding techniques, we found that linear machine learning methods such as the support vector machine (SVM) could not consistently detect differences in the neural representations of different categories of affective scenes in the pulvinar, whereas a deep neural network (DNN)-based model did so consistently, with significantly above-chance decoding accuracies. In particular, for the unpleasant-vs-neutral decoding analysis, the DNN model achieved a decoding accuracy of 58.6% on the first dataset and 64.5% on the second. A weight-map analysis of both datasets further revealed that the medial pulvinar, especially the right medial pulvinar, is the most important contributor to the DNN's decoding performance. These results demonstrate that the pulvinar contributes to the processing of visual affective scenes and that AI-inspired techniques can detect functional relationships not readily detected by conventional machine learning methods.