Thomas Kinsman, Peter Bajorski, Jeff B. Pelz; Efficient tagging of visual fixations from mobile eye trackers using hierarchical clustering for batch processing. Journal of Vision 2010;10(15):43. doi: 10.1167/10.15.43.
© ARVO (1962-2015); The Authors (2016-present)
Purpose. Large mobile eye-tracking studies produce extensive amounts of data [6, 7]. A fundamental requirement is that an experimenter examine and classify each fixation, so processing time constrains the scope of eye-tracking studies [2, 3]. The goal is to reduce data-handling time using pattern recognition.
Methods. Use models of human vision to help analyze human vision. Clusters of objects are recognized using simple features extracted from fixation images. The initial features were color histograms [1, 8], later modified to univariate features. Previous methods required solving a linear program via the simplex method; that step is circumvented here. High-level cluster analysis is performed using the Earth Mover's Distance (EMD) [1, 9] as the distance metric between images, with Ward's procedure for linkage. Sub-cluster analysis is then performed on the resulting clusters using a disjoint feature set, allowing local adaptation. The sub-cluster results are mapped to a serpentine curve (a Hilbert-like curve) to preserve spatial coherence.
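The clustering pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the histogram data is synthetic, the number of clusters is arbitrary, and with univariate features the EMD reduces to the 1-D Wasserstein distance, which SciPy computes directly. Note that SciPy's Ward formula formally assumes Euclidean distances, so applying it to an EMD matrix is a pragmatic approximation.

```python
import numpy as np
from scipy.stats import wasserstein_distance          # 1-D EMD
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Stand-in for per-fixation univariate features, e.g. normalized
# color histograms (12 fixation images, 16 bins each).
histograms = rng.random((12, 16))
histograms /= histograms.sum(axis=1, keepdims=True)

bins = np.arange(histograms.shape[1])

# Pairwise EMD between histograms, as a condensed distance vector
# (upper triangle, row by row), which linkage() accepts directly.
n = len(histograms)
dists = np.array(
    [wasserstein_distance(bins, bins, histograms[i], histograms[j])
     for i in range(n) for j in range(i + 1, n)]
)

# Hierarchical clustering with Ward's linkage on the EMD matrix,
# cut into (here, arbitrarily) four high-level clusters.
Z = linkage(dists, method="ward")
labels = fcluster(Z, t=4, criterion="maxclust")
```

Each resulting cluster would then be presented to the experimenter as a group, with a second, sub-cluster pass (on a disjoint feature set) ordering the images within each group for display.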
Results. The EMD was found to be a good model of visual similarity. Clusters were pure 25% of the time, and roughly 75% of the time clusters contained three or fewer classes, accommodating human visual classification abilities. Sub-cluster analysis successfully presented the images with good spatial coherence. Future work will involve improved algorithms, automatic feature selection, and adaptive recognition.
Conclusions. Ward's method obviates the need for sophisticated feature matching. Clusters of fixation images can be identified and presented to a user as a coherent group. Recognizing multiple similar fixation images reduces the experimenter's workload by allowing entire clusters of fixations to be classified simultaneously, yielding a 35:1 efficiency improvement.