OSA Fall Vision Meeting Abstract | December 2010
Efficient tagging of visual fixations from mobile eye trackers using hierarchical clustering for batch processing
Author Affiliations
  • Thomas Kinsman
    Multidisciplinary Vision Research Laboratory, Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, USA
  • Peter Bajorski
    Center for Quality and Applied Statistics and Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, USA
  • Jeff B. Pelz
    Multidisciplinary Vision Research Laboratory, Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, USA
Journal of Vision December 2010, Vol. 10(15), 43. doi: https://doi.org/10.1167/10.15.43
Abstract

Purpose. Large mobile eye-tracking studies result in extensive amounts of data [6, 7]. There is a fundamental requirement that an experimenter examine and classify each fixation, and this processing time constrains the scope of eye-tracking studies [2, 3]. Our goal is to reduce data-handling time using pattern recognition.

Methods. We use models of human vision to help analyze human vision. Clusters of objects are recognized using simple features extracted from fixation images. The initial features were color histograms [1, 8], but they were modified to use univariate features. High-level cluster analysis is performed using the Earth Mover's Distance (EMD) [1, 9] as the distance metric between images, with Ward's procedure for linkage. Previous methods required solving a linear program (via the Simplex method) to compute the EMD; the univariate features circumvent that step. Sub-cluster analysis is performed on the resulting clusters using a disjoint feature set, allowing local adaptation. The sub-cluster results are mapped onto a serpentine curve (a Hilbert-like curve) to preserve spatial coherence.
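
The following minimal sketch (in Python, not the authors' implementation) illustrates the high-level grouping step described above: univariate gray-level histograms are extracted from fixation images, pairwise distances are computed with the 1-D EMD (which needs no Simplex solver), and the images are grouped by hierarchical clustering with Ward's linkage. Function names and parameters such as bins and n_clusters are illustrative assumptions.

import numpy as np
from scipy.stats import wasserstein_distance
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

def histogram_features(images, bins=32):
    # One normalized gray-level histogram (a univariate feature) per fixation image.
    return np.array([np.histogram(img.ravel(), bins=bins, range=(0, 255), density=True)[0]
                     for img in images])

def emd_1d(h1, h2):
    # 1-D Earth Mover's Distance between two histograms; the univariate case has a
    # closed form, so no Simplex (transportation-problem) solver is required.
    positions = np.arange(len(h1))
    return wasserstein_distance(positions, positions, u_weights=h1, v_weights=h2)

def cluster_fixations(images, n_clusters=20):
    feats = histogram_features(images)
    dists = pdist(feats, metric=emd_1d)    # condensed matrix of pairwise EMDs
    Z = linkage(dists, method='ward')      # Ward's linkage (heuristic with a non-Euclidean metric)
    return fcluster(Z, t=n_clusters, criterion='maxclust')

# Toy usage: cluster 100 synthetic 64x64 "fixation images" into 10 groups.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(100, 64, 64))
labels = cluster_fixations(images, n_clusters=10)

In the pipeline described above, each resulting cluster would then be presented to the experimenter as a group; the sub-cluster ordering along the serpentine curve is omitted from this sketch.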

Results. The EMD has been found to be a good model of visual similarity. Clusters were found to be pure 25% of the time, and roughly 75% of the time clusters contained three or fewer classes, accommodating human visual classification abilities [4]. Sub-cluster analysis successfully presented the images with good spatial coherence. Future work will involve improved algorithms, automatic feature selection, and adaptive recognition.
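
As an illustration only (the abstract does not provide its evaluation code), the reported cluster statistics could be computed from experimenter tags as below: the fraction of clusters that are pure and the fraction containing three or fewer classes. The tag names are hypothetical.

import numpy as np

def cluster_statistics(cluster_ids, class_tags):
    # Fraction of clusters that are pure (one class) and fraction with <= 3 classes.
    cluster_ids = np.asarray(cluster_ids)
    class_tags = np.asarray(class_tags)
    clusters = np.unique(cluster_ids)
    n_classes = [len(np.unique(class_tags[cluster_ids == c])) for c in clusters]
    frac_pure = np.mean([n == 1 for n in n_classes])
    frac_small = np.mean([n <= 3 for n in n_classes])
    return frac_pure, frac_small

# Hypothetical tags for eight fixations in four clusters:
frac_pure, frac_small = cluster_statistics(
    cluster_ids=[0, 0, 1, 1, 1, 2, 2, 3],
    class_tags=["cup", "cup", "face", "hand", "face", "cup", "face", "text"])
print(frac_pure, frac_small)   # 0.5 and 1.0 for this toy input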

Conclusions. Ward's method obviates the need for sophisticated feature matching. Clusters of fixation images can be identified and presented to a user as a coherent group. Recognizing multiple similar fixation images reduces the experimenter's workload by allowing entire clusters of fixations to be classified simultaneously, resulting in a 35:1 efficiency improvement.
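
For intuition about the 35:1 figure (the counts below are hypothetical, not from the study): when whole clusters are labeled with a single action instead of labeling each fixation individually, the efficiency gain is roughly the ratio of fixations to labeling actions.

# Hypothetical example of the efficiency ratio from batch (cluster-level) tagging.
total_fixations = 3500          # assumed number of fixations to classify
cluster_labelings = 100         # one tagging action per presented cluster
speedup = total_fixations / cluster_labelings
print(f"{speedup:.0f}:1")       # -> 35:1 with these assumed counts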

Acknowledgments
The authors wish to acknowledge the support of Procter & Gamble, and Susan Munn for the use of her data. 
We also thank Dr. Carol Romanowski for her gracious help and expertise in quantifying data-preparation time estimates.
References
Rubner, Y. (1999). Perceptual metrics for image database navigation. Ph.D. thesis, Stanford University.
Edelstein, H. (1999). Introduction to data mining and knowledge discovery (3rd ed., p. 26). Potomac, MD: Two Crows Corporation.
Devlin, K. (2001). The math gene: How mathematical thinking evolved and why numbers are like gossip (pp. 42–44). New York, NY: Basic Books.
Salvucci, D. D., Goldberg, J. H. (2000). Identifying fixations and saccades in eye-tracking protocols. In Proceedings of the 2000 Symposium on Eye Tracking Research & Applications (ETRA '00), Palm Beach Gardens, FL, November 6–8, 2000 (pp. 71–78). New York, NY: ACM.
Munn, S. M., Pelz, J. B. (2009). FixTag: An algorithm for identifying and tagging fixations to simplify the analysis of data collected by portable eye trackers. ACM Transactions on Applied Perception, 6(3), 1–25.
Pontillo, D., Kinsman, T. B., Pelz, J. B. (2010). SemantiCode: Using content similarity and database-driven matching to code wearable eyetracker gaze data. In Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications (ETRA '10), Austin, TX (pp. 267–270). New York, NY: ACM.
Swain, M. J., Ballard, D. H. (1991). Color indexing. International Journal of Computer Vision, 7(1), 11–32.
Deza, M. M., Deza, E. (2009). Encyclopedia of distances. Berlin, Heidelberg: Springer-Verlag.