Abstract
We move our eyes to navigate, to identify dangers, and to recognize objects and people across a wide range of situations, including social interactions. However, the extent to which visual sampling is modulated and shaped by non-visual information is difficult to isolate experimentally. A particular quirk of nature can help achieve this feat: the occurrence of deafness. Research has shown that early profound hearing loss enhances the sensitivity and efficiency of the visual channel in deaf individuals, resulting in enhanced peripheral visual attention compared to the hearing population (Dye et al., 2009). However, whether this perceptual bias extends to the visual sampling strategies deployed during face recognition, a biologically relevant task, remains to be clarified. To this aim, we recorded the eye movements of deaf and hearing observers while they performed a delayed matching task with upright and inverted faces. Deaf observers showed a preferentially central fixation pattern compared to hearing controls, with the spatial fixation density peaking just below the eyes. Interestingly, unlike hearing observers, who presented a more global fixation pattern, deaf observers were not impaired by face inversion and did not change their sampling strategy. To assess whether this particular fixation strategy in deaf observers was paired with a larger information intake, the same participants performed the identical experiment with a gaze-contingent paradigm that parametrically and dynamically modulates the quantity of information available at each fixation: the Expanding Spotlight (Miellet et al., 2013). Reconstruction of the visual information available at each fixation with a retinal filter revealed an enlarged visual field in deafness. Unlike hearing participants, deaf observers took in a larger amount of information at every fixation. This visual sampling strategy was robust and equally effective for inverted face recognition. Altogether, our data show that the face system is flexible and can tune to distinct strategies as a function of visual and social experience.
Meeting abstract presented at VSS 2017