September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
Neural Correlates of Dynamic Face Perception
Author Affiliations
  • Huseyin Ozkan
    Brain and Cognitive Sciences, Massachusetts Institute of Technology
  • Sharon Gilad-Gutnick
    Brain and Cognitive Sciences, Massachusetts Institute of Technology
  • Evan Ehrenberg
    Brain and Cognitive Sciences, Massachusetts Institute of Technology
  • Pawan Sinha
    Brain and Cognitive Sciences, Massachusetts Institute of Technology
Journal of Vision August 2017, Vol.17, 266. doi:10.1167/17.10.266
Abstract

Past research on the electrophysiology of face perception has focused almost exclusively on brain responses to artificial stimuli that are transient and static. As a result, our knowledge of the electrophysiological correlates of face perception is rudimentary, consisting mostly of averaged ERP responses in the first 200 ms after stimulus onset and lacking virtually any description of how the brain responds to the dynamic faces that occur naturally. Our goal was to characterize the neural correlates of naturally occurring dynamic faces over a more sustained presentation time (500 ms). To this end, we recorded magnetoencephalography (MEG) responses to dynamic and static face and non-face stimuli, and used both traditional ERF component analysis, to compare our results with the known M100 and M170 face responses, and machine learning techniques, to reveal other representations of viewing a dynamic face. In our ERF analyses, we observe that dynamic-face ERFs show larger M100 and M170 responses than static-face ERFs, with the M170 occurring ~40 ms earlier. In our classification analyses, face vs. non-face classification performance improves steadily as the analysis time window is lengthened up to 500 ms, reaching ~80% accuracy at 500 ms for both dynamic and static stimuli. Hence, face information is not confined to a specific time interval but is distributed (more widely for dynamic stimuli) across the full temporal response. Finally, this strong face selectivity arises at sensors over the temporal lobes for dynamic stimuli and over the occipital lobes for static stimuli. Overall, our results both provide new correlates of dynamic face perception and highlight the critical information carried by sustained responses, as opposed to the traditional transient responses to static faces.
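The growing-time-window classification described above can be sketched as follows. This is an illustrative simulation, not the authors' analysis code: the trial counts, sensor counts, sampling rate, classifier (logistic regression), and the simulated class signal are all assumptions, chosen only to show how accuracy is evaluated as the window from stimulus onset is lengthened.

```python
# Illustrative sketch (not the authors' pipeline): classify face vs. non-face
# trials from simulated MEG sensor data, using progressively longer time
# windows from stimulus onset. All sizes and the classifier are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 30, 50  # 50 samples ~ 500 ms at 100 Hz (assumed)

# Simulated trials: noise plus a weak class-dependent signal distributed in time.
labels = rng.integers(0, 2, n_trials)                 # 0 = non-face, 1 = face
data = rng.standard_normal((n_trials, n_sensors, n_times))
data += 0.1 * labels[:, None, None]                   # weak, temporally distributed signal

def window_accuracy(n_samples):
    """Cross-validated accuracy using sensor data from onset up to n_samples."""
    X = data[:, :, :n_samples].reshape(n_trials, -1)  # flatten sensors x time
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, labels, cv=5).mean()

# Evaluate windows of ~100 ms, ~200 ms, and ~500 ms (assumed sampling rate).
accs = [window_accuracy(n) for n in (10, 20, 50)]
```

Because the simulated class signal is spread over the whole epoch, longer windows accumulate more evidence, mirroring the abstract's observation that face information is distributed across the full 500 ms rather than confined to one interval.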

Meeting abstract presented at VSS 2017
