Abstract
When observers move, two sources of visual information become available: optic flow, generated by motion, and image structure, projected from world surfaces. Optic flow specifies spatial relations and calibrates image structure; calibrated image structure preserves the spatial relations specified by optic flow after motion stops. Interacting optic flow and image structure enable stationary blurry-vision observers to perceive events when they view objects in motion (Pan et al., 2017). Similarly, when a blurry-vision observer locomotes, optic flow and blurry image structure should allow her to perceive surrounding stationary scenes. Furthermore, a locomoting observer simultaneously experiences translational optic flow generated by locomotion and rotational optic flow generated by eye movement. Because translational flow specifies depth layout and rotational flow does not, blurry scenes should be perceptible with translational flow but not with rotational flow. However, when both flows are present, is the resultant flow compromised in its power to specify spatial layout because rotational flow serves as noise to the system, or is the resultant flow as powerful as translational flow alone because the two flows work in a winner-take-all fashion? We studied these questions in three experiments in which participants identified scenes from blurry static images and from blurry videos containing translational flow (Experiment 1), rotational flow (Experiment 2), or both (Experiment 3). When first viewing the blurry static images, participants could not identify the scenes. When viewing the blurry videos, participants perceived the scenes with translational flow but not with rotational flow. When both flows were present, scenes were perceived as accurately as with translational flow alone, suggesting that the two flows work in a winner-take-all fashion. One week after viewing the blurry videos, participants successfully perceived the scenes from the static blurry images.
Therefore, as long as translational flow is available, it interacts with blurry image structure to yield accurate and stable scene perception, regardless of accompanying rotational flow.
Meeting abstract presented at VSS 2018