Abstract
Gaze fixations are used to monitor tasks and guide hand movements. Simple tasks have been studied extensively, and some complex tasks have been studied in 2D environments (e.g., Ballard et al., 1992). Much remains to be learned about complex interactions in the natural world. How is gaze distributed to support motor movement and information gathering in complex interactions? To explore these questions, we designed a sorting task with 2D printed imagery and monitored participants' gaze and grasp, seeking to understand how gaze and grasp interact when the task requires information gathering, manual interaction, and placement. Participants wore a head-mounted Pupil Labs eye tracker, which recorded the eyes at 120 Hz and the scene at 60 Hz, and a custom-designed system monitored their grasp; we developed a novel pipeline that uses templates of the objects and maps the fixation and grasp data onto those templates. The eye-data processing procedure starts by finding the fixations in the scene, then detects whether each fixation falls on an object, and identifies that object from among the templates. Once the object is identified, the fixation is projected onto the object's template. A similar procedure is followed for the hand data: the hand is located, the object it contacts is detected and identified, and the hand position is mapped onto that object's template. Monitoring gaze and grasp concurrently, we observed that during the interaction, eye movements are made both to seek information and to guide motor movements. This now allows us to perform finer spatio-temporal analyses to understand eye and hand coordination in complex interactions. We report results from the sorting task.
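
The fixation-to-template mapping described above can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a standard OpenCV feature-matching approach (ORB keypoints, RANSAC homography), and the function name, template dictionary, and thresholds are hypothetical.

```python
# Hypothetical sketch: identify which printed-object template a fixation falls on
# and project the fixation into that template's coordinate frame.
# The feature detector, thresholds, and data layout are illustrative assumptions.
import cv2
import numpy as np

ORB = cv2.ORB_create(nfeatures=1000)
MATCHER = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def identify_and_project(scene_gray, templates, fixation_xy, min_matches=15):
    """Return (template_name, fixation in template coords) or None.

    scene_gray  : grayscale scene-camera frame containing the objects
    templates   : dict mapping object name -> grayscale template image
    fixation_xy : (x, y) fixation location in scene-frame pixels
    """
    kp_s, des_s = ORB.detectAndCompute(scene_gray, None)
    if des_s is None:
        return None

    for name, tmpl in templates.items():
        kp_t, des_t = ORB.detectAndCompute(tmpl, None)
        if des_t is None:
            continue
        matches = MATCHER.match(des_t, des_s)
        if len(matches) < min_matches:
            continue

        # Homography from template to scene, estimated from matched keypoints.
        src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_s[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is None:
            continue

        # Check whether the fixation lies inside the template's outline in the scene.
        h, w = tmpl.shape[:2]
        corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
        outline = cv2.perspectiveTransform(corners, H)
        if cv2.pointPolygonTest(outline, tuple(map(float, fixation_xy)), False) < 0:
            continue

        # Project the fixation into template coordinates via the inverse homography.
        fx = np.float32([[fixation_xy]])
        fixation_in_template = cv2.perspectiveTransform(fx, np.linalg.inv(H))[0, 0]
        return name, fixation_in_template

    return None
```

The hand data could, under the same assumptions, be processed analogously by substituting the detected hand location for `fixation_xy` once the grasped object has been identified.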