Abstract
Introduction: Gaze, head, and body postures are socially important cues for orienting attention (Driver et al., 1999; Friesen et al., 2004) but have mainly been studied with static images (Azarian et al., 2017; Bayliss et al., 2004). Here, we investigate eye movements during dynamic gaze following with videos and show a tight relationship between observers' eye movements and the gazer's head velocity and peripheral gazed-target information.

Methods: Twenty-five subjects viewed 160 videos (duration 1.2 s) of an individual directing their gaze toward a target person (25% of trials), a distractor person (25% of trials), or an empty space (50% of trials). Subjects were instructed to follow the gaze direction to detect the target person (yes/no task). Eye position was recorded with an EyeLink 1000. We used a pre-trained CNN model (Chong et al., 2020) to obtain objective estimates of the gaze direction and gazed point in each video frame.

Results: Saccade endpoint distance to the gazed location was lower on target-present trials than on target-absent or distractor trials, showing that observers integrate foveal gaze information and peripheral target information to guide saccades. Critically, on 24% of trials observers initiated a reverse saccade back to the gazer after executing forward saccades that followed the gaze direction. Reverse saccades occurred on trials with significantly lower head velocity during the first 250 ms of the video (0.06 deg/s vs. 0.11 deg/s), t(24) = 10.9, p < 0.001, compared to trials with no reverse saccade. The saccade endpoint error (relative to the gazed person's head) following the reverse saccade was significantly reduced from 3.5° to 2.9°, p < 0.001, demonstrating the functional role of reverse saccades.

Conclusion: The findings show that eye movements during gaze following are tightly coupled to the dynamics of the gazer's head movement and to peripheral information about likely gazed targets.
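As a toy illustration of the paired comparison reported above (head velocity in the first 250 ms on reverse-saccade vs. no-reverse trials, t(24) = 10.9), the sketch below computes a paired t statistic from hypothetical per-subject means. All values, names, and the simulated data are illustrative assumptions, not the study's data; only the sample size (25 subjects, hence df = 24) and the nominal group means (0.06 vs. 0.11 deg/s) come from the abstract.

```python
import math
import random
import statistics

# Hypothetical per-subject mean head velocities (deg/s) over the first
# 250 ms of the video; means match the reported 0.06 vs. 0.11 deg/s,
# but the per-subject values are simulated for illustration only.
random.seed(1)
n = 25
vel_reverse = [random.gauss(0.06, 0.01) for _ in range(n)]
vel_no_reverse = [random.gauss(0.11, 0.01) for _ in range(n)]

# Paired t-test on the per-subject differences:
#   t = mean(d) / (sd(d) / sqrt(n)),  df = n - 1
diffs = [nr - r for r, nr in zip(vel_reverse, vel_no_reverse)]
t_stat = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
df = n - 1  # df = 24, matching t(24) in the abstract
```

With 25 subjects the paired design yields 24 degrees of freedom, which is why the abstract reports t(24); the sign of t depends only on the order of subtraction.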