September 2024
Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2024
Deep Learning and visual search: Using raw eye movement data, convolutional neural networks generate target-location predictions in line with experimental manipulations
Author Affiliations & Notes
  • Nicholas Crotty
    Trinity College
  • Nicole Massa
    Massachusetts General Hospital
  • Michael Grubb
    Trinity College
  • Footnotes
    Acknowledgements: NSF-2141860 CAREER Award to Michael Grubb
Journal of Vision September 2024, Vol.24, 951. doi:https://doi.org/10.1167/jov.24.10.951

      Nicholas Crotty, Nicole Massa, Michael Grubb; Deep Learning and visual search: Using raw eye movement data, convolutional neural networks generate target-location predictions in line with experimental manipulations. Journal of Vision 2024;24(10):951. https://doi.org/10.1167/jov.24.10.951.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Eye tracking during visual search generates spatiotemporally rich but complex data. Traditional analyses typically rely on summary metrics such as the proportion of trials containing a saccade to the target (or a distractor), dwell time on key stimuli, and the like. Such approaches, however, discard information contained in the raw eye data. Here, we asked whether advances in deep learning might help scientists navigate this trade-off between interpretable summary metrics and the richness of the raw data. A convolutional neural network (CNN) is a type of artificial neural network that identifies key features of its input data and uses those features to classify unlabeled inputs into their appropriate groups. CNNs learn from this classification process, using their mistakes to refine which portions of the data are most informative for determining an input's group. Although CNNs are most commonly applied to images, they can also generate predictions from other complex inputs, such as time-series data. In a pre-existing dataset, participants searched for a color-defined target among five differently colored distractors. We built a CNN that receives the raw x, y time-series data and predicts which of the six locations contained the target on each trial. We trained the CNN on two-thirds of the data and validated it on the rest. In short, the CNN performed well, predicting target location substantially above chance (67% vs. 17%). In our study, participants were pre-cued with reliable information about target color on half the trials (validity: 100%) and with unreliable information on the other half (validity: 50%). The prediction accuracies of two new CNNs, trained and validated separately on data from the two precue conditions, reflected this experimental manipulation: classification accuracy was greater on reliably cued trials (70%) than on unreliably cued trials (63%). Bootstrapped error bars and subject-level null hypothesis testing confirmed the statistical reliability of this difference. These findings highlight the potential of CNNs as a novel method for analyzing eye-tracking data.
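The abstract does not report the network architecture, framework, or hyperparameters, so the following is only a minimal sketch of the general approach: a 1-D convolutional network (written here in PyTorch, as an assumed framework) that takes a trial's raw x, y gaze samples as a two-channel time series and outputs a prediction over the six possible target locations. All layer sizes, the sequence length, and the names below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (PyTorch assumed; the abstract does not name a framework
# or architecture). A 1-D CNN maps a trial's raw (x, y) gaze time series to a
# prediction over the six possible target locations.
import torch
import torch.nn as nn

class GazeCNN(nn.Module):
    def __init__(self, n_locations: int = 6):
        super().__init__()
        # Input shape: (batch, 2 channels [x, y], time samples)
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time -> fixed-length feature vector
        )
        self.classifier = nn.Linear(64, n_locations)

    def forward(self, gaze):               # gaze: (batch, 2, T)
        feats = self.features(gaze).squeeze(-1)
        return self.classifier(feats)      # logits over the six locations

# Toy usage: a batch of 8 trials, each with 500 (x, y) gaze samples.
model = GazeCNN()
logits = model(torch.randn(8, 2, 500))
predicted_location = logits.argmax(dim=1)  # index (0-5) of the predicted target location
```

The two-thirds/one-third train-validation split and the separate per-condition retraining described in the abstract would sit on top of a model like this; those steps are omitted here for brevity.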
