Vision Sciences Society Annual Meeting Abstract | June 2006
Modeling eye-hand movement sequences in natural tasks
Author Affiliations
  • Weilie Yi
    Department of Computer Science, University of Rochester
  • Dana Ballard
    Department of Computer Science, University of Rochester, and Department of Brain and Cognitive Sciences, University of Rochester
  • Mary Hayhoe
    Department of Computer Science, University of Rochester, and Department of Brain and Cognitive Sciences, University of Rochester
Journal of Vision June 2006, Vol.6, 490. https://doi.org/10.1167/6.6.490
Citation: Weilie Yi, Dana Ballard, Mary Hayhoe; Modeling eye-hand movement sequences in natural tasks. Journal of Vision 2006;6(6):490. https://doi.org/10.1167/6.6.490.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

We show that a Markov model captures the variance in eye and hand movement sequences in a natural task such as making a sandwich. Observing the different ways subjects perform the task allows it to be decomposed automatically into subtasks. The different ways of performing the task can then be described as alternative sequences of primitive operations, including eye movements and hand movements. Each such sequence fully characterizes one sandwich-making behavior. The transition probabilities between subtasks are then computed from human data. The resulting model can produce new variations, which can be executed by a graphical human model in virtual reality with eye movement, body movement, and object-manipulation capabilities.

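The core of such a model can be sketched as a first-order Markov chain: transition probabilities are estimated by counting subtask-to-subtask transitions in observed sequences, and new behavioral variations are generated by sampling walks through the chain. The subtask labels below are hypothetical placeholders; in the study, the decomposition was derived from subjects' actual eye and hand movements.

```python
import random
from collections import defaultdict

# Hypothetical subtask labels standing in for the automatically
# decomposed subtasks; the real data came from human subjects.
SEQUENCES = [
    ["fixate_bread", "grasp_bread", "fixate_knife", "grasp_knife", "spread"],
    ["fixate_knife", "grasp_knife", "fixate_bread", "grasp_bread", "spread"],
    ["fixate_bread", "grasp_bread", "fixate_knife", "grasp_knife", "spread"],
]

def estimate_transitions(sequences):
    """Count first-order transitions and normalize to probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    probs = {}
    for a, nexts in counts.items():
        total = sum(nexts.values())
        probs[a] = {b: n / total for b, n in nexts.items()}
    return probs

def sample_sequence(probs, start, max_len=10):
    """Generate a new subtask sequence by walking the Markov chain.

    The walk stops at an absorbing subtask (one with no observed
    successors) or when max_len is reached.
    """
    seq = [start]
    while seq[-1] in probs and len(seq) < max_len:
        nexts = probs[seq[-1]]
        states, weights = zip(*nexts.items())
        seq.append(random.choices(states, weights=weights)[0])
    return seq
```

Sampling from the estimated chain yields sequences that are new yet statistically consistent with the observed behaviors, which is what lets the model drive the virtual-reality human model.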
 

The model can explain almost all eye fixations observed in the course of sandwich-making, including anticipatory fixations to objects that are to be manipulated in the future. We interpret such anticipatory fixations as being initiated to update visual memory of objects relevant to future subtasks. The memory update facilitates upcoming visual search and visual guidance of hand movements. In this model, memory uncertainty initiates look-aheads probabilistically, together with other task-specific parameters.

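One simple way to realize this mechanism is to let memory uncertainty grow as the memory trace decays with time since an object was last fixated, and to trigger a look-ahead fixation stochastically with probability proportional to that uncertainty. The exponential decay form and the `decay_rate` value below are assumptions for illustration, not the paper's fitted parameters.

```python
import math
import random

def lookahead_probability(time_since_fixation, decay_rate=0.5):
    """Map elapsed time since last fixation to look-ahead probability.

    Memory uncertainty is modeled (as an assumption) as the complement
    of an exponentially decaying memory trace; decay_rate is a free,
    task-specific parameter.
    """
    uncertainty = 1.0 - math.exp(-decay_rate * time_since_fixation)
    return uncertainty

def maybe_lookahead(time_since_fixation, rng=random):
    """Stochastically decide whether to initiate a look-ahead fixation."""
    return rng.random() < lookahead_probability(time_since_fixation)
```

Under this sketch, an object that has not been fixated recently is increasingly likely to attract an anticipatory fixation, refreshing its memory representation before the subtask that needs it.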
 

Experiments with human subjects making sandwiches show that the Markov model of subtask planning fits the human data almost exactly. The mean number of anticipatory fixations in 10 trials, averaged over 3 subjects, was 3.10 with a standard deviation of 1.13, whereas the mean for the computer simulation was 3.09 with a standard deviation of 1.08. Thus anticipatory fixations can be seen as giving advance notice of a visuo-motor plan.

 
Yi, W., Ballard, D., & Hayhoe, M. (2006). Modeling eye-hand movement sequences in natural tasks [Abstract]. Journal of Vision, 6(6):490, 490a, http://journalofvision.org/6/6/490/, doi:10.1167/6.6.490.
Footnotes
 This work was supported by NIH grants EY05729 and RR09283.