December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract  |   December 2022
Social action understanding after late sight recovery from congenital near-blindness
Author Affiliations & Notes
  • Ilana Naveh
    The Hebrew University of Jerusalem
  • Sara Attias
    The Hebrew University of Jerusalem
  • Asael Y. Sklar
    The Hebrew University of Jerusalem
  • Ehud Zohary
    The Hebrew University of Jerusalem
  • Footnotes
    Acknowledgements  Supported by the DFG German-Israeli Project Cooperation grant #Z0 349/1
Journal of Vision December 2022, Vol.22, 3974.
Understanding actions performed by others and interpreting their emotional context is commonplace in daily life: we readily assess complex social settings from minimal visual cues. Theories of action understanding typically assume that extensive experience of action observation in early life is required for its assimilation. But what happens if pattern vision in those years is extremely limited? We studied Ethiopian children, born with dense bilateral cataracts, who were surgically treated only years later. These patients have relatively poor visual acuity, even after surgery, and typically have difficulty interpreting facial gestures. We tested whether other visual cues, such as body configuration and motion signals, which are relatively preserved despite image blur, allow them to understand social situations. Seven newly sighted patients viewed videos or still images of human interactions and categorized them as "friendly" or "aggressive". The conditions were (1) animations ("Full-Body" condition); (2) point-light displays, containing only motion information ("PLD"); and (3) a static snapshot from the animation, preserving only configural information ("ST"). The patients performed worse than normally developing subjects in all conditions. However, they performed significantly above chance in the Full-Body condition (79.8% correct, p=0.001), as well as in the impoverished conditions (67.9% correct, p=0.023, and 66.7% correct, p=0.001, in the ST and PLD conditions, respectively). In another test, the patients were asked to categorize images of people as "scared" or "angry" based on their facial expressions; facial expression and body posture were manipulated orthogonally. Unlike controls, the patients' responses were affected solely by body posture (62.1% correct, p=0.015), and not by facial expression (47.1% correct, p=0.203).
We conclude that the ability to interpret social situations can be acquired despite prolonged early-onset visual deprivation, by relying on biological-motion or configural-body cues. Future experiments will clarify whether this capability developed despite the near-blindness before surgery, or was acquired through visual experience after pattern vision was restored.

