Abstract
The visual system uses interocular velocity differences (IOVDs) to compute three-dimensional (3D) motion. In most models, monocular 2D motion signals are extracted and combined at an early stage of visual processing (i.e., V1), where eye-of-origin information is still available. However, we have demonstrated that observers can use eye-specific 2D motion information to judge 3D motion direction, suggesting that eye-specific information may remain available for 3D motion computation at later stages of visual motion processing (Rokers et al., JOV, 2011). Stimuli consisted of 60 small (~1°) drifting Gabors displayed inside an annulus (3°–7° eccentricity). In Experiment 1, we measured 2D motion aftereffects (MAEs) with test stimuli presented in unadapted locations. We predicted, and confirmed, that there would be little, if any, 2D MAE, because the receptive field (RF) size of V1 neurons is small and MT inherits direction selectivity from V1. In Experiment 2, we measured 3D MAEs in which the adapting stimuli were Gabors drifting in opposite directions at the same retinal locations in the two eyes. Test stimuli were identical to the adapting stimuli but presented in unadapted locations. We found robust 3D MAEs, suggesting that 3D MAEs are not driven by the local motion processing that presumably occurs in V1, and that 3D direction selectivity arises at later stages of visual processing with larger RFs. In Experiment 3, we used a stimulus configuration in which the Gabors in the two eyes did not share the same retinal locations. We again found strong 3D MAEs, suggesting that eye-specific 2D motion information is preserved at later stages of visual processing to compute 3D motion. Our results demonstrate that the locus of 3D motion computation may lie late in the visual hierarchy, at least after V1, and that, contrary to previous views, eye-specific motion information remains available for 3D motion computation at later stages of visual processing.
Meeting abstract presented at VSS 2016