Vision Sciences Society Annual Meeting Abstract | September 2024
Volume 24, Issue 10 | Open Access
Semantic and Visual Features Drive the Intrinsic Memorability of Co-Speech Gestures
Author Affiliations
  • Xiaohan (Hannah) Guo, The University of Chicago
  • Susan Goldin-Meadow, The University of Chicago
  • Wilma A. Bainbridge, The University of Chicago
Journal of Vision, September 2024, Vol. 24(10), 599. doi: https://doi.org/10.1167/jov.24.10.599
Abstract

Co-speech gestures that teachers spontaneously produce during explanations have been shown to benefit students’ learning. Moreover, prior work suggests that information conveyed through teachers’ gestures is less likely to deteriorate in memory than information conveyed through speech (Church et al., 2007). However, how the intrinsic features of gestures affect students’ memory remains unclear. The memorability effect denotes the phenomenon whereby adults from different backgrounds consistently remember, and forget, the same visual stimuli (static images, dance moves, etc.), owing to the stimuli’s intrinsic semantic and visual features. In this study, we investigated whether certain gestures are consistently remembered and, if so, which semantic and visual features are associated with them. We first created 360 10-second audiovisual stimuli by video-recording 20 actors producing unscripted, natural speech and gestures as they explained Piagetian conservation problems. Two trained experimenters extracted high-level semantic and low-level visual/acoustic features from the speech and gesture in each stimulus. We then tested online participants’ memory in three conditions using a between-subjects study-test paradigm: the full audiovisual stimuli (gesture+speech condition), the visual-only version of the same stimuli (gesture condition), and the audio-only version (speech condition). Within each of two experimental blocks, participants encoded nine random stimuli from one actor and immediately afterward made old/new judgments on all 18 stimuli from that actor. Participants showed significant consistency in which gesture, gesture+speech, and speech stimuli they remembered and forgot. Focusing on the visual-only (gesture) condition, we found that (1) more meaningful gestures and speech predicted more memorable gestures, and (2) gestures produced with both hands were more memorable than gestures produced with one hand. Our results suggest that both semantic features (conveyed through speech and gesture) and visual features (conveyed through gesture) make co-speech gestures memorable.
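
The abstract does not spell out how memory consistency was quantified. In the memorability literature (e.g., Isola et al., 2011; Bainbridge et al., 2013), the standard approach is a split-half analysis: participants are repeatedly split at random into two groups, each stimulus receives a memorability score within each group (commonly its hit rate), and the two groups' per-stimulus scores are correlated; a reliably positive correlation means different people remember the same items. The Python sketch below illustrates that general procedure on toy data; the function name, the hit-rate scoring rule, the Spearman rank correlation, and all data dimensions are illustrative assumptions, not the authors' actual pipeline.

import numpy as np
from scipy.stats import spearmanr

# Illustrative split-half consistency analysis, as commonly used in
# memorability studies. This is a sketch under assumed conventions,
# not the authors' reported analysis.

rng = np.random.default_rng(0)

def split_half_consistency(hits, seen, n_splits=1000):
    """hits, seen: (n_participants, n_stimuli) binary arrays.
    seen[p, s] = 1 if participant p studied stimulus s;
    hits[p, s] = 1 if p later correctly judged s as 'old'.
    Returns the mean Spearman rho across random participant splits."""
    n = hits.shape[0]
    rhos = []
    for _ in range(n_splits):
        order = rng.permutation(n)
        half_a, half_b = order[: n // 2], order[n // 2 :]
        # Per-stimulus hit rate within each half; clamp the denominator
        # so stimuli unseen by a half do not divide by zero.
        score_a = hits[half_a].sum(axis=0) / np.maximum(seen[half_a].sum(axis=0), 1)
        score_b = hits[half_b].sum(axis=0) / np.maximum(seen[half_b].sum(axis=0), 1)
        rhos.append(spearmanr(score_a, score_b)[0])
    return float(np.mean(rhos))

# Toy data: 200 participants x 360 stimuli, each participant studying a
# random subset, with a stimulus-specific "true" memorability rate.
true_mem = rng.uniform(0.3, 0.9, size=360)
seen = (rng.random((200, 360)) < 0.25).astype(int)
hits = ((rng.random((200, 360)) < true_mem) & (seen == 1)).astype(int)

print(f"split-half consistency: {split_half_consistency(hits, seen):.2f}")

In practice, per-stimulus scores are often corrected for false alarms (e.g., hit rate minus false-alarm rate), and the observed correlation is compared against a chance baseline from shuffled splits; the abstract does not say which variant the authors used.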
