Abstract
We extend our 2012 VSS study of tracking a single object that moves behind an occluding surface. Observers tapped the screen when cued by a sound to indicate where the moving object was at the time of the sound. We measured gaze and touch localization when the object was either occluded or visible, with or without landmarks. We found that gaze and touch undershot the target position in occlusion trials, especially later in tracking (>1.9 s), and that localization was not affected by landmarks. Here we test whether the lag bias reflects coding of object position by the eye-movement system by comparing tracking with attention (gaze constrained) vs. free eye movements, and whether landmarks help track hidden objects when gaze is constrained. Method: Seventy-five participants tracked a 1° square that moved at 4°/s in marked or unmarked 5-s trials. Marks were a row of 10 asterisks along the object’s path. A sound probe (SP) occurred randomly between 1.3 and 3.8 s after the square started moving. Participants tapped with their forefinger to indicate the object’s position (the object was hidden on half the trials). Thirty participants tracked the square with their gaze fixated on the center of the screen; a second group of 45 participants was free to track the object with their eyes.
Results: Localization was best when the tracked object was visible and early in the tracking sequence (0.3° overshoot). In occlusion trials, the tap lagged behind the object (4.5°), and the lag increased with tracking distance. Landmarks had different effects depending on fixation and SP delay: with central fixation, landmarks helped localize the hidden object (lag was reduced by 0.8°), but they hindered localization of the hidden object when gaze moved freely (lag reaching 8.0° at 3.8 s). We discuss how eye movements help localize hidden objects and how, when gaze is constrained, attention uses landmarks to help guide tracking.
Meeting abstract presented at VSS 2013