Vision Sciences Society Annual Meeting Abstract  |   August 2023
Computing a unique neural fingerprint of bodily expressions and actions
Author Affiliations
  • Vojtech Smekal
    Maastricht University
  • Marta Poyo Solanas
    Maastricht University
  • Beatrice de Gelder
    Maastricht University
Journal of Vision August 2023, Vol.23, 5567. doi:https://doi.org/10.1167/jov.23.9.5567
      Vojtech Smekal, Marta Poyo Solanas, Beatrice de Gelder; Computing a unique neural fingerprint of bodily expressions and actions. Journal of Vision 2023;23(9):5567. https://doi.org/10.1167/jov.23.9.5567.



      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Introduction: Mid-level features of bodies and body expressions have been proposed as behaviorally relevant information coded in the brain, bridging the gap between low-level visual features and high-level, semantically grounded cognitive labels. Previously, these features were defined as individual components of body movements and postures (de Gelder & Poyo Solanas, 2021). Here, we take a novel, bottom-up approach, defining unique data-driven features that characterize body expressions.

Methods: In a 3T MR scanner (2 mm isotropic; TR: 1300 ms), participants were presented with body stimuli displaying six actions performed by six actors (self-protecting, greeting a friend, expressing frustration, brushing off, peeling a banana, searching for an object). To investigate the effect of dynamics, the stimuli were presented in three conditions: a video, a single still frame from the video, and a video with the frames scrambled in random order. The faces were blurred, and each stimulus was presented for 1 s. Each video was also analyzed to extract the coordinates of 21 body key points for each frame. We then applied principal component analysis, hierarchical clustering, and searchlight representational similarity analysis to these coordinates to characterize the movements in terms of key-point-derived features.

Results: We found several cortical regions that showed greater activity for dynamic than for static stimuli, differentiated between the action categories, and preferred either normal-order or frame-scrambled videos. Specifically, activity in the left extrastriate body area (EBA) differentiated significantly between the actions, and the right temporoparietal junction preferred normal-order videos over frame-scrambled ones. We also found systematic relations between bottom-up defined features of body expressions and brain regions.
Conclusion: For each body action, we could systematically relate computationally defined features of the body expression to specific brain activity in a wide range of cortical regions.
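The feature pipeline described in the Methods can be sketched as follows. This is a minimal illustration, not the authors' code: the array shapes, the number of PCA components, the clustering settings, and the synthetic key-point data are all assumptions; only the overall sequence (key-point coordinates → PCA → hierarchical clustering → a model dissimilarity matrix of the kind compared against neural data in searchlight RSA) follows the abstract.

```python
# Hypothetical sketch: PCA on body key-point trajectories, hierarchical
# clustering of the videos, and a model representational dissimilarity
# matrix (RDM). Synthetic data stands in for the real tracked coordinates.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)

# 6 actions x 6 actors = 36 videos (per the abstract); frame count is assumed
n_videos, n_frames, n_keypoints = 36, 30, 21
# (x, y) coordinates of 21 key points per frame, flattened per video
coords = rng.normal(size=(n_videos, n_frames * n_keypoints * 2))

# PCA via SVD of the mean-centred data; rows of Vt are the principal axes
centred = coords - coords.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
n_components = 10                              # assumed dimensionality
features = centred @ Vt[:n_components].T       # low-dimensional movement features

# Hierarchical clustering of videos in the reduced feature space
Z = linkage(features, method="ward")
labels = fcluster(Z, t=6, criterion="maxclust")  # 6 clusters, one per action

# Model RDM: pairwise correlation distances between feature vectors,
# the kind of matrix compared against neural RDMs in searchlight RSA
model_rdm = squareform(pdist(features, metric="correlation"))

print(features.shape, labels.shape, model_rdm.shape)
```

In a real analysis, the model RDM built from these key-point features would be correlated with a neural RDM computed in each searchlight sphere, yielding a map of where the brain's activity patterns track the bottom-up movement features.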
