September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Novel offline technique to process and understand interaction with printed imagery
Author Affiliations & Notes
  • Anjali K Jogeshwar
    Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology
  • Gabriel J. Diaz
    Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology
  • Jeff B. Pelz
    Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology
Journal of Vision September 2019, Vol. 19, 146b. doi: https://doi.org/10.1167/19.10.146b
      Anjali K Jogeshwar, Gabriel J. Diaz, Jeff B. Pelz; Novel offline technique to process and understand interaction with printed imagery. Journal of Vision 2019;19(10):146b. doi: https://doi.org/10.1167/19.10.146b.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Gaze fixations are used to monitor tasks and guide hand movements. Simple tasks have been studied extensively, and some complex tasks have been studied in 2D environments (e.g., Ballard et al., 1992), but much remains to be learned about complex interactions in the natural world. How is gaze distributed to support motor movement and information gathering in complex interactions? To explore these questions, we designed a sorting task with two-dimensional printed imagery and monitored participants' gaze and grasp, in order to understand how the two interact when a sorting task requires information gathering, manual interaction, and placement. Gaze was recorded with a head-mounted Pupil Labs eye tracker (eye cameras at 120 Hz, scene camera at 60 Hz), and grasp with a custom-designed monitoring system. We developed a novel offline system that uses a template of each object and maps the fixation and grasp data onto the templates. The eye data-processing procedure starts by finding the fixations in the scene, then detecting whether each fixation falls on an object, and identifying that object from among the templates. Once the object is identified, the fixation is projected onto the object's template. A similar procedure is followed for the hand data: the hand is located, the object under it is detected and identified, and finally the hand position is mapped onto the object's template. By monitoring gaze and grasp concurrently, we observed that during the interaction, eye movements serve both information seeking and the guidance of motor movements. This now allows us to perform finer spatio-temporal analyses to understand eye-hand coordination in complex interactions. We report results from the sorting task.
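The template-mapping procedure described above (locate a fixation, identify the object it falls on, project it onto that object's template) could be sketched roughly as follows. This is a minimal illustration under assumed structures: the function names, the bounding-box object test, and the use of a planar homography for the projection are all assumptions for illustration, as the abstract does not specify the implementation.

```python
import numpy as np

# Hypothetical sketch of the template-mapping step; names and data
# structures are assumptions, not the authors' actual implementation.

def identify_object(point, template_bounds):
    """Return the name of the template whose scene-space bounding box
    contains the fixation point, or None if the fixation is off-object."""
    x, y = point
    for name, (x0, y0, x1, y1) in template_bounds.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def project_point(H, point):
    """Project a 2D scene point onto a template via a 3x3 planar homography H."""
    x, y = point
    v = H @ np.array([x, y, 1.0])
    return (float(v[0] / v[2]), float(v[1] / v[2]))

# Example: one printed object occupying a region of the scene image.
bounds = {"card_A": (100.0, 50.0, 300.0, 250.0)}
fixation = (120.0, 80.0)
obj = identify_object(fixation, bounds)   # "card_A"
H = np.eye(3)                             # identity homography: no warping
print(obj, project_point(H, fixation))    # card_A (120.0, 80.0)
```

The same skeleton would apply to the hand data: replace the fixation point with the detected hand location before the identify-and-project steps.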
