Abstract
We have recently developed a neural model for coding 3D motion direction in primate area MT. By incorporating the geometry of retinal projection, it encodes motion direction with a bank of strikingly non-Gaussian tuning functions. The model makes surprising predictions about how performance should change as a function of stimulus location (i.e., across viewing distance and eccentricity). In this work, we used a motion direction estimation task to test these predictions. We manipulated viewing distance (20 cm, 31 cm, or 67 cm) across blocks of trials. To manipulate viewing distance precisely at such short distances, we built a rear-projection system mounted on rails (ProPixx 3D projector; Screen Tech ST-PRO-DCF) that can be easily adjusted for viewing distances from 20 cm to 270 cm with a head-fixed subject. During each trial (1 s), a spherical volume of low-contrast light and dark dot stimuli was rendered with full stereoscopic cues (disparity, expansion, and size change), moving at one of three speeds (5 cm/s, 7.75 cm/s, or 16.75 cm/s). The stimulus volume was scaled in three dimensions for each viewing distance to maintain a consistent 5° visual angle (1.75 cm, 2.70 cm, and 5.85 cm diameter, respectively). Subjects reported the perceived 3D direction of motion by using a physical knob to adjust the angle of a stereoscopic response arrow, also rendered in the virtual 3D space. Direction estimation error varied sinusoidally as a function of motion direction, consistent with a frontoparallel motion bias. Crucially, and as predicted by the model, subjects often confused the sign of the z-axis (depth) component of the 3D motion direction, and this effect increased with viewing distance. Taken together, these results support the notion that 3D motion perception performance depends on motion direction, viewing distance, and environmental speed, as predicted by our model of encoding in MT.
Meeting abstract presented at VSS 2018
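A minimal sketch, not part of the original abstract, of the constant-visual-angle scaling described above: assuming the standard relation diameter = 2 · d · tan(θ/2) for a stimulus at viewing distance d subtending θ degrees, the snippet below (the function name stimulus_diameter_cm is illustrative) recovers the reported stimulus diameters from the three viewing distances.

```python
import math

def stimulus_diameter_cm(viewing_distance_cm, visual_angle_deg=5.0):
    """Physical diameter needed for a stimulus to subtend a fixed visual angle.

    Uses diameter = 2 * d * tan(angle / 2); the function name is illustrative.
    """
    half_angle_rad = math.radians(visual_angle_deg / 2.0)
    return 2.0 * viewing_distance_cm * math.tan(half_angle_rad)

# Viewing distances used in the experiment (cm) and the corresponding
# diameters that hold the stimulus at a constant 5 degrees of visual angle.
for d in (20.0, 31.0, 67.0):
    print(f"{d:5.1f} cm -> {stimulus_diameter_cm(d):.2f} cm diameter")
# Prints ~1.75, 2.71, and 5.85 cm, consistent with the 1.75, 2.70, and
# 5.85 cm diameters reported in the abstract (differences are rounding).
```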