Abstract
Introduction: Previously we have shown that humans direct their initial eye movements during face identification to locations that optimize perceptual performance, both at the group and the individual level (Peterson & Eckstein, PNAS, 2012; Peterson & Eckstein, Psychological Science, in press). Here, we investigated whether this strategy, which is optimal for upright faces at common conversational distances, is maintained while identifying inverted and scaled faces.

Methods: Observers completed three separate speeded (350 ms, enough time for one eye movement) 1-of-10 face identification tasks with faces embedded in white Gaussian noise. In the first task, observers identified faces presented upright and scaled to the size of a face at a normal conversational distance (6° from the middle of the eyes to the middle of the mouth). In the second task, observers recognized the same faces with the images rotated in the picture plane by either 90° or 180°. In the third task, observers identified the same faces at different scales, corresponding to viewing faces at different distances.

Results: Observers displayed consistent initial eye movement patterns for upright faces, with an average landing position near the vertical midline of the face and displaced slightly downward from the eyes. This consistency was accompanied by inter-observer variability: some observers fixated just below the eyes, while a smaller subset looked further down, toward the nose. Observers also displayed consistent and individualized first eye movements for horizontal (rotated 90°), inverted, and scaled faces. Furthermore, these patterns were consistent for individual observers across tasks: observers who looked close to the eyes on upright faces also looked close to the eyes on inverted and scaled faces. The results show that eye movement strategies optimized for common viewing conditions may generalize to many less familiar viewing situations.
Meeting abstract presented at VSS 2013