Luca Vizioli, Sebastien Miellet, Roberto Caldara; Tracking qualitative and quantitative information use during face recognition with a dynamic Spotlight. Journal of Vision 2010;10(7):612. doi: https://doi.org/10.1167/10.7.612.
© ARVO (1962-2015); The Authors (2016-present)
Social experience and cultural factors shape the strategies used to extract information from faces. These external forces, however, do not modulate information use. Using a gaze-contingent technique that restricts information outside the fovea (the Spotlight), we recently showed that humans rely on identical face information (i.e., the eye and mouth regions) to achieve face recognition (Caldara, Zhou, & Miellet, 2010). Although the Spotlight allows precise identification of the diagnostic information required for face processing (i.e., qualitative information), the amount of information (i.e., quantitative information) necessary to effectively code facial features is still unknown. To address this issue, we monitored the eye movements of observers during a face recognition task with a novel technique that parametrically and dynamically restricts information outside central vision. We used Spotlights with Gaussian apertures centered on the observers' fixations that progressively expanded (at a rate of 1° every 25 ms) as a function of fixation time. Thus, the longer the fixation duration, the larger the Spotlight aperture. The aperture was reset to 2° (the foveal region) at each new fixation. To facilitate the programming of saccades and natural fixation sequences, we replaced information outside central vision with an average face template. This novel technique allowed us to simultaneously identify the active use of information and estimate the quantity of information necessary at each fixation location to achieve this process. The dynamic Spotlight revealed modulations in the quantity of information extracted from diagnostic features, even for the same facial features (i.e., the eyes). This sensitivity varied across observers. Our data suggest that the face system is not uniformly tuned for facial features, but rather that the calibration modulating the intake of visual information is observer-specific.
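The aperture dynamics described above (a 2° Gaussian aperture at each new fixation, growing 1° per 25 ms of fixation, with the surround replaced by an average face template) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' stimulus code: the function names, the pixels-per-degree conversion, and the use of the aperture radius as the Gaussian sigma are assumptions for the sake of the example.

```python
import numpy as np

def aperture_radius_deg(fixation_ms, base_deg=2.0, rate_deg=1.0, step_ms=25.0):
    """Aperture grows 1 deg of visual angle per 25 ms of fixation,
    starting from the 2-deg foveal region at each new fixation."""
    return base_deg + rate_deg * (fixation_ms / step_ms)

def spotlight_frame(face, template, fix_xy, fixation_ms, px_per_deg=10.0):
    """Blend the face image with an average-face template through a
    Gaussian aperture centered on the current fixation (fix_xy = (x, y)).
    Assumption: the aperture radius in degrees is used as the Gaussian sigma."""
    h, w = face.shape
    y, x = np.ogrid[:h, :w]
    sigma_px = aperture_radius_deg(fixation_ms) * px_per_deg
    d2 = (x - fix_xy[0]) ** 2 + (y - fix_xy[1]) ** 2
    mask = np.exp(-d2 / (2 * sigma_px ** 2))  # 1 at fixation, falls off with distance
    return mask * face + (1.0 - mask) * template
```

At fixation onset the mask passes only the ~2° foveal region; a fixation held for 100 ms would expand the radius to 6°, so more of the face and less of the template is visible the longer the observer fixates.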