September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
Deafness Amplifies Visual Information Sampling during Face Recognition
Author Affiliations
  • Junpeng Lao
    Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
  • Chloé Stoll
    Laboratoire de Psychologie et Neurocognition (CNRS), Université Grenoble Alpes, Grenoble, France
  • Matthew Dye
    Rochester Institute of Technology/National Technical Institute for Deaf, Rochester, New York, USA
  • Olivier Pascalis
    Laboratoire de Psychologie et Neurocognition (CNRS), Université Grenoble Alpes, Grenoble, France
  • Roberto Caldara
    Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
Journal of Vision August 2017, Vol.17, 24. doi:10.1167/17.10.24
Abstract

We move our eyes to navigate and to identify dangers, objects, and people in a wide range of situations during social interactions. However, the extent to which visual sampling is modulated and shaped by non-visual information is difficult to control experimentally. A particular circumstance of nature can help achieve this feat: the occurrence of deafness. Research has shown that early profound hearing loss enhances the sensitivity and efficiency of the visual channel in deaf individuals, resulting in greater peripheral visual attention than in the hearing population (Dye et al., 2009). However, whether such a perceptual bias extends to the visual sampling strategies deployed during the biologically relevant task of face recognition remains to be clarified. To this aim, we recorded the eye movements of deaf and hearing observers while they performed a delayed matching task with upright and inverted faces. Deaf observers showed a preferential central fixation pattern compared to hearing controls, with the spatial fixation density peaking just below the eyes. Interestingly, unlike hearing observers, who presented a global fixation pattern, deaf observers were not impaired by face inversion and did not change their sampling strategy. To assess whether this particular fixation strategy in deaf observers was paired with a larger information intake, the same participants performed the identical experiment with a gaze-contingent paradigm that parametrically and dynamically modulated the quantity of information available at each fixation: the Expanding Spotlight (Miellet et al., 2013). Visual information reconstruction with a retinal filter revealed an enlarged visual field in deafness. Unlike hearing participants, deaf observers extracted a larger amount of information at every fixation. This visual sampling strategy was robust and remained equally effective for inverted face recognition.
Altogether, our data show that the face system is flexible and might tune to distinct strategies as a function of visual and social experience.

Meeting abstract presented at VSS 2017
