Abstract
Gait introduces a complex pattern of visual motion on the retina (Matthis et al., 2021) that depends on the location of gaze in the scene. This makes it difficult to predict optic flow patterns in natural settings such as crossing a street, where moving objects like pedestrians and cars add to the self-generated retinal motion. To make navigation decisions, the walker must parse this signal to distinguish self-generated from externally generated motion. To measure retinal flow patterns, we tracked gaze while subjects walked across an intersection. As observed previously, gaze behavior consists of fixations that are largely stable in the scene, separated by saccades, as the walker moves forward in the direction of travel. After identifying and removing saccades, we transformed the head-camera images into retinocentric coordinates and used optic flow estimation to approximate the flow vectors across the retina during fixations, within a region approximately 90 degrees in diameter. Preliminary observations suggest that the range of retinal velocities generated by self-motion was easily discriminable from the velocities generated by objects such as pedestrians, especially as they approached the subject. Motion generated while looking in the approximate direction of heading is slow near the fovea, increasing to 10 degrees/second at the horizon in the periphery and up to 18 degrees/second on the ground plane near the subject. In contrast, a pedestrian at 2 meters distance, walking away from the subject at 20 degrees eccentricity, generated a local motion signal averaging around 15 degrees/second. This decreases to zero when the subject fixates the pedestrian, while peripheral motion climbs to 20 degrees/second. These signals are easily segmented. Similarly, a car passing laterally through the subject's visual field at 10 meters distance and 25 mph produces retinal velocities averaging around 30 degrees/second.
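The self-motion velocities reported above follow from simple viewing geometry. As an illustrative sketch (not the authors' analysis pipeline), the angular speed of a stationary scene point for a translating observer is the component of observer velocity perpendicular to the line of sight divided by the distance to the point; the walking speed of 1.4 m/s used below is an assumed typical value, not a measured one:

```python
import math

def retinal_speed_deg_per_s(v, d, theta):
    """Angular speed (deg/s) of a stationary scene point seen by an
    observer translating at v m/s, where d is the distance to the
    point (m) and theta is the angle (radians) between the heading
    direction and the direction of the point.

    Only the velocity component perpendicular to the line of sight,
    v * sin(theta), produces retinal motion for a stationary point.
    """
    return math.degrees(v * math.sin(theta) / d)

# A point straight ahead (theta = 0) generates no flow, consistent
# with low retinal speeds near the fovea when gaze is near heading.
print(retinal_speed_deg_per_s(1.4, 8.0, 0.0))          # 0.0
# A point 8 m away at 90 degrees eccentricity, walker at 1.4 m/s:
print(retinal_speed_deg_per_s(1.4, 8.0, math.pi / 2))  # ~10 deg/s
```

This captures the qualitative pattern in the data: flow speed falls to zero at the point of fixation in the heading direction and grows with eccentricity and with proximity to the ground plane (smaller d).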
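The laterally moving car can be treated the same way. A minimal sketch, assuming a momentarily static observer and an assumed traverse of +/-25 meters around the point of closest approach (the traverse length is a hypothetical choice for illustration, not a measured quantity):

```python
import math

def object_angular_speed(v_obj, d, x):
    """Instantaneous angular speed (rad/s) of an object moving
    laterally at v_obj m/s along a line at perpendicular distance
    d (m) from the observer; x (m) is the object's offset from the
    point of closest approach."""
    return v_obj * d / (d ** 2 + x ** 2)

v = 25 * 0.44704  # 25 mph in m/s (~11.2 m/s)
# Sample the car's retinal speed at 1 m steps over an assumed
# +/-25 m traverse at 10 m perpendicular distance.
samples = [math.degrees(object_angular_speed(v, 10.0, x))
           for x in range(-25, 26)]
mean_speed = sum(samples) / len(samples)  # ~30 deg/s
```

The angular speed peaks at closest approach (x = 0, roughly 64 deg/s here) and falls off with lateral offset; the average over the traverse is on the order of 30 deg/s, comparable to the value reported above.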