Vision Sciences Society Annual Meeting Abstract  |  December 2022  |  Volume 22, Issue 14  |  Open Access
Naturalistic two-person social perception in the brain
Author Affiliations & Notes
  • Emalie McMahon
    Johns Hopkins University
  • Michael Bonner
    Johns Hopkins University
  • Leyla Isik
    Johns Hopkins University
  • Footnotes
    Acknowledgements: This work was supported by NSF GRFP (DGE-1746891) awarded to Emalie McMahon.
Journal of Vision December 2022, Vol.22, 4006. doi:https://doi.org/10.1167/jov.22.14.4006
Citation: Emalie McMahon, Michael Bonner, Leyla Isik; Naturalistic two-person social perception in the brain. Journal of Vision 2022;22(14):4006. https://doi.org/10.1167/jov.22.14.4006.
Abstract

In a bustling social event, like the VSS Tiki Bar, we quickly and effortlessly perceive who is interacting with whom and details of their interactions, such as whether our colleagues are engaged in a friendly or adversarial debate. Extracting these social details is crucial for deciding how we want to act. Although we do this with ease, little is understood about how the mind and brain solve this problem. Recent research has shown that a region in the posterior superior temporal sulcus (pSTS) is visually selective for social interactions, but it remains unknown which features of a social interaction this and other brain regions represent. To answer this question, we showed participants 250 3-second video clips of naturalistic two-person interactions in an fMRI experiment. The stimulus set was curated to limit low-level confounds, such that early-layer features from an ImageNet-trained AlexNet were minimally correlated with the social dimensions. The videos varied in sociality, social dimensions (e.g., valence, arousal, and cooperativity), and visual dimensions (e.g., the distance of the agents and the spatial expanse of the scene). Each participant separately completed functional localizers to define category-selective regions, including scene, social interaction, and theory of mind regions. We used an encoding-model approach to investigate where social and visual dimensions are represented in the brain. After controlling for low-level information and motion energy, we validated that scene information, such as indoor/outdoor status and the spatial expanse of the scene, was represented in scene regions (PPA and OPA). Crucially, we found that the presence of a social interaction is represented in the pSTS, replicating prior findings in a curated, naturalistic dataset. We will next use multivariate, whole-brain analyses to investigate where high-level features of social interactions are represented in the brain.
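The abstract does not give implementation details, but the general encoding-model logic it describes (controlling for low-level and motion-energy features, then asking how well social and visual dimensions predict voxel responses) can be sketched as below. Everything in this snippet, including the variable names, feature dimensions, and the use of cross-validated ridge regression from scikit-learn, is an illustrative assumption rather than the authors' actual pipeline; random placeholder arrays stand in for the real video annotations and fMRI responses.

```python
# Minimal sketch of a voxelwise encoding-model analysis in the spirit of the
# abstract. All shapes, names, and preprocessing choices are illustrative
# assumptions, not the authors' actual pipeline.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Hypothetical data: 250 video clips, responses from one ROI.
n_videos, n_voxels = 250, 500
Y = rng.standard_normal((n_videos, n_voxels))       # ROI responses per video (placeholder)
X_social = rng.standard_normal((n_videos, 6))       # e.g., sociality, valence, arousal, cooperativity, ...
X_lowlevel = rng.standard_normal((n_videos, 100))   # e.g., early AlexNet features + motion energy

def residualize(Y, X_nuisance):
    """Regress nuisance features out of Y (ordinary least-squares residuals)."""
    beta, *_ = np.linalg.lstsq(X_nuisance, Y, rcond=None)
    return Y - X_nuisance @ beta

# Control for low-level and motion-energy information before fitting the social model.
Y_resid = residualize(Y, X_lowlevel)

# Cross-validated ridge encoding model: predict held-out voxel responses
# from the social/visual dimension annotations.
scores = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X_social):
    model = RidgeCV(alphas=np.logspace(-2, 4, 13))
    model.fit(X_social[train], Y_resid[train])
    pred = model.predict(X_social[test])
    # Per-voxel prediction accuracy: correlation between predicted and observed responses.
    r = [np.corrcoef(pred[:, v], Y_resid[test][:, v])[0, 1] for v in range(n_voxels)]
    scores.append(np.nanmean(r))

print(f"Mean encoding accuracy across folds: {np.mean(scores):.3f}")
```

With real data, the per-voxel correlations from the held-out folds would be mapped back onto the ROI or the whole brain to localize where each dimension is represented; with the random placeholders used here, accuracies hover around zero.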
