Vision Sciences Society Annual Meeting Abstract  |   May 2008
Eye can read your mind: Decoding eye movements to reveal the targets of categorical search tasks
Author Affiliations
  • Gregory Zelinsky
    Psychology Department and Computer Science Department, Stony Brook University, NY, USA
  • Wei Zhang
    Computer Science Department, Stony Brook University, NY, USA
  • Dimitris Samaras
    Computer Science Department, Stony Brook University, NY, USA
Journal of Vision May 2008, Vol. 8, 380. https://doi.org/10.1167/8.6.380
Abstract

Theories of top-down search guidance typically assume that guidance to a distractor is proportional to that object's similarity to the target. This relationship, however, has been demonstrated only for simple patterns; it is less clear whether it holds for realistic objects. We report a novel method for quantifying guidance by reading the subject's mind, defined here as classifying the target of a categorical search task (either teddy-bears or butterflies) based on the distractors fixated on target-absent trials. The task was standard present/absent search. Half of the subjects searched for a teddy-bear target, the other half searched for a butterfly target. Except for the targets, search displays were identical between the two groups, meaning that the same distractors appeared in the same locations. All distractors were random real-world objects selected from the Hemera collection. To quantify target-distractor similarity, we used a machine learning method (AdaBoost) and new target exemplars to train a teddy-bear/butterfly classifier. Target-absent trials were then combined across the teddy-bear and butterfly groups, and the distractors selected by gaze on these trials were identified and input to the classifier. The classifier evaluated these objects in terms of their color, local texture, and global shape similarity to the teddy-bear and butterfly classes, then assigned each object to one of these target categories. Our joint behavioral-computational method correctly classified 76% of the actual butterfly target-absent searches and 66% of the teddy-bear target-absent searches; the teddy-bear rate, although lower, was still significantly better than chance (50%). These results definitively prove the existence of categorical search guidance to real-world distractors; in the absence of guidance, above-chance classification would not have been possible. Our method also demonstrates that these guidance signals are expressed in fixation preferences, and that they are large enough to read a subject's mind and decipher the target category of target-absent searches.
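
To make the decoding pipeline concrete, the sketch below illustrates the general approach in Python: train an AdaBoost classifier to separate teddy-bear from butterfly exemplars, then classify the distractors a subject fixated on a target-absent trial and take a vote over those fixations. This is a minimal illustrative sketch, not the authors' implementation; the placeholder feature vectors, the scikit-learn AdaBoostClassifier, the decode_trial function, and the majority-vote decoding rule are all assumptions introduced here for clarity.

    # Minimal sketch, assuming precomputed color/texture/shape feature vectors
    # for each object. All data below are random placeholders.
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier

    rng = np.random.default_rng(0)

    # 1) Train a teddy-bear vs. butterfly classifier on new target exemplars.
    #    X_exemplars: (n_exemplars, n_features); y_exemplars: 0 = teddy-bear, 1 = butterfly.
    X_exemplars = rng.normal(size=(200, 64))      # placeholder exemplar features
    y_exemplars = rng.integers(0, 2, size=200)    # placeholder category labels
    clf = AdaBoostClassifier(n_estimators=100, random_state=0)
    clf.fit(X_exemplars, y_exemplars)

    # 2) Decode one target-absent trial from the distractors the subject fixated:
    #    classify each fixated distractor, then vote over fixations.
    def decode_trial(fixated_features):
        """fixated_features: (n_fixated_distractors, n_features) array."""
        votes = clf.predict(fixated_features)
        return "butterfly" if votes.mean() > 0.5 else "teddy-bear"

    # Example: decode a trial on which five distractors were fixated (placeholder data).
    trial_features = rng.normal(size=(5, 64))
    print(decode_trial(trial_features))

Under this scheme, above-chance decoding accuracy across target-absent trials is only possible if fixations are preferentially drawn to target-similar distractors, which is exactly the guidance signal the abstract reports.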

Zelinsky, G., Zhang, W., & Samaras, D. (2008). Eye can read your mind: Decoding eye movements to reveal the targets of categorical search tasks [Abstract]. Journal of Vision, 8(6):380, 380a, http://journalofvision.org/8/6/380/, doi:10.1167/8.6.380.
Footnotes
 This work was supported by NIH grant R01-MH63748.