Abstract
Humans frequently rely on information from across the visual field for navigation and object manipulation. Information from the periphery is useful for planning saccades and goal-directed movements prior to visual fixation and for performing tasks without direct fixation, such as adjusting the radio while keeping one's eyes on the road. We performed three experiments that examined how observers use aspect ratio, a monocular cue, and horizontal disparity, a binocular cue, to estimate 3D orientation when grasping targets at various retinal eccentricities and depths relative to fixation. We measured subjects' 3D orientation thresholds separately for these cues at 0°, 7.5°, and 15° of retinal eccentricity and then compared the predictions of an optimal Bayesian cue integrator based on these thresholds with how subjects used the cues in an object prehension task with targets at different retinal eccentricities. We then quantified how subjects integrated the cues when grasping targets up to 1° of horizontal disparity from the theoretical horopter. Thresholds for both cues increased with the retinal eccentricity of the targets. In the grasping task, subjects relied equally on the two cues for targets under visual fixation but relied increasingly on monocular information as retinal eccentricity increased; at 15° of retinal eccentricity, aspect ratio influenced grasp orientations five times more than horizontal disparity. These results matched the predictions of the Bayesian integrator. Similarly, when subjects grasped targets at different depths from the fixation point, they relied more on aspect ratio as the distance of the targets from the horopter increased; at 1° from the horopter, subjects' orientation estimates were based entirely on monocular information. Our results showed that the influence of information from across the visual field on visually guided movements depends on its reliability relative to information from other cues and from other positions in the visual field.
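For reference, a minimal sketch of how such predictions are typically derived, assuming the standard maximum-likelihood (reliability-weighted) formulation of optimal cue integration: each cue's predicted weight is proportional to its reliability, the inverse of the variance estimated from that cue's single-cue threshold at a given retinal eccentricity. Here $\sigma_A$ and $\sigma_D$ denote the measured orientation thresholds for aspect ratio and horizontal disparity, and $\hat{\theta}_A$, $\hat{\theta}_D$ the single-cue orientation estimates; the notation is illustrative rather than the paper's own.
\[
  \hat{\theta} = w_A\,\hat{\theta}_A + w_D\,\hat{\theta}_D,
  \qquad
  w_A = \frac{1/\sigma_A^{2}}{1/\sigma_A^{2} + 1/\sigma_D^{2}},
  \qquad
  w_D = 1 - w_A .
\]
Under this formulation, as $\sigma_D$ grows with eccentricity or with distance from the horopter while $\sigma_A$ grows more slowly, $w_A$ approaches 1, consistent with the increasing reliance on aspect ratio reported above.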