Vision Sciences Society Annual Meeting Abstract | May 2008
Decoding frequency and timing of emotion perception from direct intracranial recordings in the human brain
Author Affiliations
  • Naotsugu Tsuchiya
    Humanities and Social Sciences, California Institute of Technology
  • Hiroto Kawasaki
    Department of Neurosurgery, University of Iowa
  • Matthew Howard
    Department of Neurosurgery, University of Iowa
  • Ralph Adolphs
    Humanities and Social Sciences, California Institute of Technology
Journal of Vision May 2008, Vol. 8, 962. https://doi.org/10.1167/8.6.962
Abstract

How do regions of higher-order visual cortex represent information about emotions in facial expressions? This question has received considerable interest from fMRI, lesion, and electrophysiological studies. The most influential model of face processing argues that static aspects of a face, such as its identity, are encoded primarily in ventral temporal regions, whereas dynamic information, such as emotional expression, depends on lateral temporal regions, notably the superior temporal sulcus and gyrus. However, the supporting evidence comes mainly from clinical observation and fMRI, both of which lack the temporal resolution needed to track information flow. More recently, an alternative theory has been proposed, suggesting that common initial processing of both aspects occurs in ventral temporal cortex. To test these competing hypotheses, we studied electrophysiological responses in 9 awake human patients undergoing epilepsy monitoring, in whom over 120 subdural electrode contacts were implanted in ventral temporal cortex (including the fusiform face area, FFA) and lateral temporal cortex (including the superior temporal sulcus, STS). The patients viewed static and dynamic facial expressions of emotion while performing either a gender-discrimination or an emotion-discrimination task.

We used a novel decoding method that quantifies how much information about the facial stimulus is available in the time-varying neuronal oscillations of the field potential. We estimated stimulus-induced oscillations with a time-frequency spectral analysis using the multitaper method, and then subjected this time-frequency representation of the response to a multivariate decoding analysis.
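
For concreteness, the following is a minimal Python sketch (not the authors' code) of this kind of pipeline: a multitaper time-frequency estimate of field-potential power on each trial, followed by cross-validated multivariate decoding of the stimulus category. The array names (lfp, labels), the window parameters, and the choice of classifier are illustrative assumptions, not details reported in the abstract.

import numpy as np
from scipy.signal.windows import dpss
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def multitaper_spectrogram(lfp, fs, win_s=0.2, step_s=0.05, nw=3, n_tapers=5):
    """Per-trial time-frequency power using DPSS (Slepian) tapers.

    lfp: array of shape (n_trials, n_samples), sampled at fs Hz.
    Returns an array of shape (n_trials, n_times, n_freqs).
    """
    win = int(win_s * fs)
    step = int(step_s * fs)
    tapers = dpss(win, nw, n_tapers)                  # (n_tapers, win)
    tf = []
    for start in range(0, lfp.shape[1] - win + 1, step):
        seg = lfp[:, start:start + win]               # (n_trials, win)
        # Taper each segment, FFT, and average power across tapers
        # for a stable spectral estimate.
        spec = np.mean(
            np.abs(np.fft.rfft(seg[:, None, :] * tapers[None], axis=-1)) ** 2,
            axis=1,
        )                                             # (n_trials, n_freqs)
        tf.append(spec)
    return np.stack(tf, axis=1)


def decode(tf, labels):
    """Cross-validated decoding accuracy from time-frequency features."""
    X = np.log(tf.reshape(tf.shape[0], -1) + 1e-12)   # flatten time x frequency
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return cross_val_score(clf, X, labels, cv=5).mean()

This sketch pools all time-frequency features into a single decoder; assessing the timing of information, as described in the abstract, would instead require running the decoder separately within sliding time windows.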

Our analysis revealed that ventral temporal cortex rapidly categorizes faces versus non-face objects within 100 ms. We also found that ventral temporal cortex represents emotion in dynamically morphing faces more quickly and more accurately than lateral temporal cortex. Finally, we found that the quality of the information represented in ventral temporal cortex is substantially modulated by task-relevant attention.

Tsuchiya, N., Kawasaki, H., Howard, M., & Adolphs, R. (2008). Decoding frequency and timing of emotion perception from direct intracranial recordings in the human brain [Abstract]. Journal of Vision, 8(6):962, 962a, http://journalofvision.org/8/6/962/, doi:10.1167/8.6.962.
Footnotes
N.T. is supported by the Japan Society for the Promotion of Science (JSPS). H.K. is supported by NIH (R03 MH070497-01A2). M.H. is supported by NIH (R01 DC004290-06). R.A. is supported by the James S. McDonnell Foundation.