Dave Gonzalez, Ewa Niechwiej-Szwedo; Sequential Movements: When does Binocular Vision Facilitate Object Grasping and Placing. Journal of Vision 2015;15(12):1145. doi: 10.1167/15.12.1145.
© ARVO (1962-2015); The Authors (2016-present)
Vision provides a rich source of spatial and temporal information about the environment and one’s own actions, which is used to plan and execute upper limb movements. Previous research has shown that viewing with both eyes confers a greater advantage during the grasping phase than during the reaching phase. However, most studies examined performance using a single reach-to-grasp movement. Since most daily activities involve sequential manipulation actions, it is important to examine eye-hand coordination during the performance of these more complex actions. Therefore, we explored the role of binocular vision in a sequential task that involved precision grasping and placing a target onto a vertical needle. Six participants picked up and placed 6 beads (one at a time) onto a needle under binocular and monocular viewing conditions while eye and limb movements were recorded. The difficulty of the grasping task was manipulated by using 2 bead sizes, and the kinematic analysis focused on 4 phases of the movement: approach to the bead, bead grasping, return to the needle, and bead placement on the needle. This analysis allowed us to delineate which component of the task (reaching for and grasping the bead vs. transporting and placing it) benefits more from binocular vision. We found that binocular vision was most beneficial after the bead had been grasped. Movement times during the return and placement phases were significantly shorter under binocular viewing (0.6 s, SE = 0.055 s) than under monocular viewing (left eye: 0.997 s, SE = 0.106 s; right eye: 1.136 s, SE = 0.119 s; p < 0.01). These results indicate that placing the bead onto the needle demands a higher level of precision, and thus binocular visual input, than the grasping phase. Further analysis will quantify the temporal relation between the eyes and hand during task execution.
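To clarify how condition summaries like those above (mean movement time with standard error) are computed, here is a minimal Python sketch using only the standard library. The movement times in the example are hypothetical illustrative values for one participant set, not the study's data.

```python
import statistics

def mean_and_se(times):
    """Return (mean, standard error of the mean) for a list of
    movement times in seconds. SE = sample SD / sqrt(n)."""
    m = statistics.mean(times)
    se = statistics.stdev(times) / len(times) ** 0.5
    return m, se

# Hypothetical return-and-placement movement times (s) for six
# participants under one viewing condition; illustrative only.
binocular_times = [0.55, 0.62, 0.58, 0.63, 0.60, 0.61]

m, se = mean_and_se(binocular_times)
print(f"mean = {m:.3f} s, SE = {se:.3f} s")
```

The same computation, applied per viewing condition (binocular, left eye, right eye), yields the kind of mean/SE pairs reported in the abstract; the significance test itself (p < 0.01) would additionally require a repeated-measures comparison across conditions.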
Meeting abstract presented at VSS 2015