Journal of Vision, September 2024, Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Visual experience is necessary for dissociating face- and language-processing in the ventral visual stream
Author Affiliations
  • Elizabeth J. Saccone
    Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
  • Akshi
    Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
  • Judy S. Kim
    Center for Human Values, Princeton University, Princeton, NJ, USA
  • Mengyu Tian
    Center for Educational Science and Technology, Beijing Normal University at Zhuhai, China
  • Marina Bedny
    Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
Journal of Vision, September 2024, Vol. 24, 197. https://doi.org/10.1167/jov.24.10.197

Citation: Elizabeth J. Saccone, Akshi, Judy S. Kim, Mengyu Tian, Marina Bedny; Visual experience is necessary for dissociating face- and language-processing in the ventral visual stream. Journal of Vision 2024;24(10):197. https://doi.org/10.1167/jov.24.10.197.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

The relative contributions of innate predispositions and experience to face-selectivity in the ventral occipitotemporal cortex (vOTC) are hotly debated. Recent studies with people born blind suggest that face specialization emerges regardless of experience. In blindness, the fusiform face area (FFA) is said either to process face shape, accessed through touch or sound, or to maintain its behavioral role in person recognition by specializing for human voices. We hypothesized instead that in blind people the anatomical location of the FFA responds to language. While undergoing fMRI, congenitally blind English speakers (N=12) listened to spoken language (English), foreign speech (Russian, Korean, Mandarin), non-verbal vocalizations (e.g., laughter), and control non-human scene sounds (e.g., forest sounds) during a 1-back repetition task. Participants also performed a ‘face localizer’ task, touching 3D-printed models of faces and control scenes, and a language localizer (spoken words > backwards speech, Braille > tactile shapes). We identified individual-subject ROIs inside an FFA mask generated from sighted data. In people born blind, the anatomical location of the FFA showed a clear preference for language over all other sounds, human or not. Responses to spoken language were higher than to foreign speech and non-verbal vocalizations, which in turn did not differ from scene sounds. This pattern was observed even in parts of vOTC that responded more to touching faces than to control scenes. Specialization for faces in vOTC is therefore influenced by experience: in the absence of vision, lateral vOTC becomes implicated in language. We speculate that shared circuits that evolved for communication specialize for either face recognition or language, depending on experience.
