Stephanie Rossit; Does binocular vision drive the lower visual field advantage for grasping? . Journal of Vision 2014;14(10):420. doi: 10.1167/14.10.420.
Humans achieve better performance when reaching for and grasping stimuli positioned in the lower than in the upper visual field. Moreover, the brain regions involved in visuomotor control also show a lower visual field preference for hand actions (e.g., Rossit et al., 2013). The current study investigated whether the lower visual field advantage for grasping is related to the availability of binocular cues. Right-handed participants were asked to fixate on one of two light-emitting diodes such that objects could appear in either the upper or lower right visual field. While maintaining fixation, they were required to reach out and grasp objects under conditions of either monocular or binocular vision. Grasping movements were performed towards self-illuminated objects in open-loop and simultaneously with a fixation task. In line with previous studies, the analysis of kinematic parameters revealed that grip apertures were larger under monocular than under binocular viewing. Moreover, under both binocular and monocular viewing, grip apertures were less variable when objects were viewed in the lower as opposed to the upper visual field. In addition, under binocular viewing there was a stronger relationship between object size and grip aperture when objects were presented in the lower visual field as compared with the upper visual field, whereas no visual field effect was observed in the monocular condition. These results indicate that binocular cues may play an important role in the lower visual field advantage for grasping. In particular, the availability of binocular cues may allow better programming of grip scaling specifically towards objects in the lower visual field, reflecting the fact that in our everyday lives this is the region of space where we mostly interact with objects.
Meeting abstract presented at VSS 2014