Abstract
The visual system can recover scene-relative object motion by removing the self-motion component from the object's retinal motion, a process called flow parsing. However, little is known about the temporal dynamics of flow parsing. Here we addressed this question by examining how flow parsing gain (i.e., the proportion of the self-motion component subtracted from the object's retinal motion) is modulated by the duration of exposure to optic flow. A stereo display simulated forward self-motion at 0.30 m/s through a cloud of 58 red wireframe objects (depth range: 0.34 m), with a yellow dot probe (diameter: 0.0025º) moving vertically in the scene. In the full-field condition, objects appeared across the entire image plane (56º × 33º), providing both global and local motion information around the probe. In the hemi-field condition, objects were placed on the opposite side of the image plane from the probe, removing local motion information around the probe. Five display durations (100 ms, 200 ms, 400 ms, 700 ms, and 1000 ms) were tested. For each display duration, the midpoint of the interval in which the probe was visible corresponded to the same time point (900 ms) of a 1000 ms display, ensuring that the probe had the same depth, eccentricity, and self-motion component in the cloud. A self-motion component was added to the probe's retinal motion, and an adaptive staircase determined the point at which the probe was perceived to move vertically in the scene. Across 11 participants, flow parsing gain remained unchanged across the five display durations in the full-field condition but decreased with duration in the hemi-field condition; a control experiment showed that this decrease was not due to a bias in perceived heading. We conclude that both global and local motion affect the temporal dynamics of flow parsing. Flow parsing is a fast process (≤100 ms), and its accuracy appears to decrease with increased exposure to global flow.
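In this nulling paradigm, the gain can be read off the staircase's convergence point. A minimal sketch of that logic, under the standard flow parsing formulation (the symbols r, v_vert, v_self, λ, and g are ours for illustration, not from the abstract):

\[
r \;=\; v_{\mathrm{vert}} + \lambda\, v_{\mathrm{self}}, \qquad
\hat{v}_{\mathrm{scene}} \;=\; r - g\, v_{\mathrm{self}} \;=\; v_{\mathrm{vert}} + (\lambda - g)\, v_{\mathrm{self}},
\]

where \(r\) is the probe's displayed retinal motion, \(\lambda\) is the staircase-controlled proportion of the self-motion component added to it, and \(\hat{v}_{\mathrm{scene}}\) is the perceived scene-relative motion after the visual system subtracts a proportion \(g\) of the self-motion component. The probe is perceived to move vertically exactly when \(\lambda = g\), so the staircase's convergence value of \(\lambda\) is taken as the flow parsing gain.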
Meeting abstract presented at VSS 2017