Journal of Vision, September 2021, Volume 21, Issue 9 (Open Access)
Vision Sciences Society Annual Meeting Abstract
Neural Correlates of Integration Processes during Dynamic Face Perception
Author Affiliations & Notes
  • Nihan Alp
    Sabancı University, Faculty of Arts and Social Sciences
  • Huseyin Ozkan
    Sabancı University, Faculty of Engineering and Natural Sciences
  • Footnotes
    Acknowledgements  This work is supported by a Starting Grant from Sabancı University (B.A.CG-19-01966).
Journal of Vision September 2021, Vol.21, 1846. doi:https://doi.org/10.1167/jov.21.9.1846
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Integrating the spatiotemporal information constantly presented by the highly dynamic world around us is essential to navigate, reason, and decide properly. Although this is extremely important in face-to-face conversation, very little research to date has specifically examined the neural correlates of temporal integration in dynamic face perception. Our study separates the composite neural correlates of the spatial and temporal integration processes from each other. Thus, we provide statistically robust observations about brain activations that are specific to temporal integration or to spatial integration. For this purpose, we generate video recordings of neutral faces of individuals and of non-face objects, and then frequency-tag (modulate the contrast of, in an interlaced manner) the even and odd frames at two specific frequencies (f1 and f2). Here, tagging not only increases the signal-to-noise ratio (SNR) of steady-state visual evoked potentials (SSVEPs) but also lets us trace nonlinear processing at the neural level as the temporally separated, frequency-tagged frames are integrated. To this end, we measure SSVEPs as participants view the generated videos, and analyze the intermodulation components (IMs), which reflect nonlinear processing and indicate temporal integration. A pattern analysis is additionally conducted to quantify the information in the SSVEPs and assess the statistical robustness of our observations. We show that the medial temporal and the inferior and medial frontal areas respond strongly and selectively when viewing dynamic faces. These regions also increase their activation as a function of increasing sequential dynamic information, hence manifesting the essential processes underlying our ability to perceive and understand our social world.
Since IMs can be generated only if even and odd frames are processed in succession and integrated temporally, the strong IMs show that the time between frames (1/60 ≈ 0.017 seconds) is sufficient for temporal integration.
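As a minimal illustration of the analysis concept: intermodulation components appear at sums and differences of integer multiples of the two tagging frequencies (|n1·f1 + n2·f2| with both n1 and n2 nonzero), whereas purely linear responses appear only at the harmonics of f1 and f2. The abstract does not report the actual tagging frequencies used; the values f1 = 6 Hz and f2 = 7.5 Hz below are hypothetical, chosen only for illustration.

```python
def im_frequencies(f1, f2, max_order=3):
    """Enumerate low-order intermodulation (IM) frequencies |n1*f1 + n2*f2|
    with |n1| + |n2| <= max_order, excluding pure harmonics of either
    input (i.e., terms with n1 == 0 or n2 == 0)."""
    ims = set()
    for n1 in range(-max_order, max_order + 1):
        for n2 in range(-max_order, max_order + 1):
            if n1 == 0 or n2 == 0:
                continue  # harmonics of a single frequency, not IMs
            if abs(n1) + abs(n2) > max_order:
                continue  # keep only low-order combinations
            f = abs(n1 * f1 + n2 * f2)
            if f > 0:
                ims.add(round(f, 6))
    return sorted(ims)

# Hypothetical tagging frequencies (not from the abstract):
print(im_frequencies(6.0, 7.5))  # → [1.5, 4.5, 9.0, 13.5, 19.5, 21.0]
```

In an SSVEP analysis, spectral power at these IM frequencies (as opposed to power at the harmonics of f1 and f2 alone) is the signature of nonlinear integration across the two tagged frame streams.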