Abstract
Tracking a target with the hand is an alternative or enhancement to traditional push-button psychophysics. Here, observers were instructed to mirror, with their index finger, the motion of a purely cyclopean, disparity-defined target in a dynamic random-element stereogram (DRES). During each 10 s trial, the target moved in a three-dimensional random walk but remained entirely in either crossed or uncrossed disparity (i.e., either in front of or behind the larger surface). Three-dimensional finger position was monitored in real time (60 Hz) using a Leap Motion controller. The data were analyzed by cross-correlating target and finger velocities along each of the three cardinal motion directions. Surprisingly, tracking of horizontal and vertical (i.e., frontoparallel) motion was better than depth tracking (higher peak correlations and shorter latencies), even though disparity processing was required to see, and therefore track, the target at all. More surprisingly, tracking performance for crossed-disparity targets was markedly better than for uncrossed-disparity targets, even though disparity sign was the only difference between them. When static and centered, the crossed-disparity target was consistent with an object floating in front of a background, and somewhat consistent with the end of a pillar projecting from the background; the uncrossed-disparity target was consistent with a surface viewed through an aperture, and somewhat consistent with the base of a tunnel. Once the target began to move, however, only the first crossed-disparity interpretation (a floating object) remained plausible: pillars and tunnels have sides that should become more clearly visible as relative target position changes, and apertures are generally fixed in position on a surface. Thus, while the cause of the performance asymmetry remains uncertain, we speculate that it arose because moving crossed-disparity targets in a DRES admit a single plausible figure-ground interpretation.
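To illustrate the velocity cross-correlation analysis described above, the following is a minimal sketch (not the authors' code), assuming 60 Hz position traces for a single axis; the function name, lag window, and variable names (e.g., target_z, finger_z) are illustrative assumptions:

```python
import numpy as np

def tracking_xcorr(target_pos, finger_pos, fs=60.0, max_lag_s=2.0):
    """Cross-correlate target and finger velocities along one axis.

    target_pos, finger_pos: equal-length 1-D position traces sampled at fs Hz.
    Returns (lags_s, corr): lag axis in seconds and the normalized
    cross-correlation; peak correlation and latency (lag of the peak)
    can be read off from these.
    """
    # Differentiate position to velocity, then z-score each trace
    tv = np.diff(target_pos) * fs
    fv = np.diff(finger_pos) * fs
    tv = (tv - tv.mean()) / tv.std()
    fv = (fv - fv.mean()) / fv.std()

    n = len(tv)
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    # Mean product of overlapping, z-scored segments at each lag
    corr = np.array([
        np.mean(tv[max(0, -k):n - max(0, k)] * fv[max(0, k):n - max(0, -k)])
        for k in lags
    ])
    return lags / fs, corr

# Example usage for the depth (z) axis of one trial:
# lags_s, corr = tracking_xcorr(target_z, finger_z)
# latency_s = lags_s[np.argmax(corr)]   # shorter latency = faster tracking
# peak_r = corr.max()                   # higher peak = better tracking
```

In this scheme, "better" tracking along an axis corresponds to a higher peak in corr and a peak occurring at a shorter positive lag.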
Meeting abstract presented at VSS 2016