Naotsugu Tsuchiya, Hiroto Kawasaki, Matthew Howard, Ralph Adolphs; Decoding frequency and timing of emotion perception from direct intracranial recordings in the human brain. Journal of Vision 2008;8(6):962. doi: https://doi.org/10.1167/8.6.962.
How do regions of higher-order visual cortex represent information about emotions in facial expressions? This question has received considerable interest from fMRI, lesion, and electrophysiological studies. The most influential model of face processing argues that static aspects of a face, such as its identity, are encoded primarily in ventral temporal regions, whereas dynamic information, such as emotional expression, depends on lateral temporal cortex, including the superior temporal sulcus and gyrus. However, the supporting evidence comes mainly from clinical observation and fMRI, both of which lack the temporal resolution to track information flow. An alternative theory has recently been proposed, which suggests that common initial processing for both aspects occurs in ventral temporal cortex. To test these competing hypotheses, we studied electrophysiological responses in 9 awake human patients undergoing epilepsy monitoring, in whom over 120 sub-dural electrode contacts were implanted in ventral temporal (including fusiform face area, FFA) and lateral temporal (including superior temporal sulcus, STS) cortex. The patients viewed static and dynamic facial expressions of emotion while performing either a gender discrimination or an emotion discrimination task.
We used a novel decoding method that quantified the information about the facial stimulus available from time-varying neuronal oscillations in the field potential. We estimated the stimulus-induced oscillations with a time-frequency spectral analysis using a multi-taper method. This time-frequency representation of the response was then subjected to a multivariate decoding analysis.
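The pipeline described above can be sketched in two steps: a multi-taper time-frequency estimate of a field-potential trace, followed by a multivariate decoder applied to the resulting spectrogram features. This is an illustrative sketch, not the authors' analysis code; all function names, window parameters, and the nearest-centroid decoder are assumptions standing in for whatever classifier the study actually used.

```python
import numpy as np
from scipy.signal.windows import dpss  # discrete prolate spheroidal (Slepian) tapers

def multitaper_spectrogram(x, fs, win_len=0.2, step=0.05, nw=3, k=5):
    """Sliding-window power spectra, averaged over k DPSS tapers.

    Averaging over orthogonal tapers reduces the variance of the
    spectral estimate relative to a single-window periodogram.
    Returns an array of shape (time bins, frequency bins).
    """
    n = int(win_len * fs)                      # samples per window
    hop = int(step * fs)                       # samples between window starts
    tapers = dpss(n, nw, Kmax=k)               # (k, n) orthogonal tapers
    spec = []
    for s in range(0, len(x) - n + 1, hop):
        seg = x[s:s + n]
        # taper the segment k ways, FFT each, average the power spectra
        p = np.mean(np.abs(np.fft.rfft(tapers * seg, axis=1)) ** 2, axis=0)
        spec.append(p)
    return np.array(spec)

def nearest_centroid_decode(train_X, train_y, test_X):
    """Minimal multivariate decoder: assign each test trial to the
    class whose mean training feature vector (centroid) is nearest."""
    classes = np.unique(train_y)
    cents = np.array([train_X[train_y == c].mean(axis=0) for c in classes])
    dists = ((test_X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return classes[dists.argmin(axis=1)]
```

As a usage illustration, one can simulate two trial classes whose field potentials oscillate at different frequencies, flatten each trial's spectrogram into a feature vector, and decode class labels from held-out trials; with a clear spectral difference the nearest-centroid rule recovers the labels well above chance.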
Our analysis revealed that ventral temporal cortex rapidly categorizes faces from non-face objects within 100 ms. We found that ventral temporal cortex represents emotion in dynamically morphing faces more quickly and accurately than lateral temporal cortex. Finally, we found that the quality of the information represented in ventral temporal cortex is substantially modulated by task-relevant attention.