Journal of Vision
September 2024, Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | September 2024
Gaze patterns modeled with an LLM can be used to classify autistic vs. non-autistic viewers
Author Affiliations
  • Amanda J Haskins
    Dartmouth College
  • Thomas L Botch
    Dartmouth College
  • Brenda D Garcia
    Dartmouth College
  • Jennifer McLaren
    Dartmouth Hitchcock Medical Center
  • Caroline E Robertson
    Dartmouth College
Journal of Vision September 2024, Vol. 24, 1190. https://doi.org/10.1167/jov.24.10.1190
Citation: Amanda J Haskins, Thomas L Botch, Brenda D Garcia, Jennifer McLaren, Caroline E Robertson; Gaze patterns modeled with an LLM can be used to classify autistic vs. non-autistic viewers. Journal of Vision 2024;24(10):1190. https://doi.org/10.1167/jov.24.10.1190.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Atypical visual attention is a promising marker of autism spectrum conditions (ASC). Yet it remains unclear which mental processes drive individual- and group-level gaze differences in autism, in part because eye-tracking analyses have focused on properties of external visual stimuli (e.g., object categories) and have failed to investigate a key influence on gaze: the viewer’s own internal conceptual priorities. Disambiguating these influences is crucial for advancing gaze as an endophenotype for autism. Here, we tested the hypothesis that gaze differences in autism stem from abstract, conceptual-level information rather than object-categorical information. Adult participants (N = 40; 20 ASC) viewed real-world photospheres (N = 60) in VR. We characterized conceptual-level scene information using human captions, which we transformed into sentence-level embeddings using a large language model (BERT). For each participant, we obtained a “conceptual gaze model”: the linear relationship between that participant’s gaze and the conceptual features (BERT embeddings, dimensionality-reduced with PCA). To compare the influence of internal, conceptual-level information (“for sale”, “sports fan”) with that of external, image-based properties (“hat”), we also modeled gaze patterns using a vision model with a comparable transformer architecture (ViT). Using a support vector machine (SVM) iteratively trained to classify participant pairs from their conceptual gaze models, we found that individual classification for both autistic and non-autistic participants significantly exceeded chance (62% overall, p < 0.001); moreover, individual classification was higher for conceptual gaze models than for visual categorical models (t(39) = 4.9, p < 0.001). Next, using a binary SVM to evaluate group-level differences in autistic gaze patterns, we found higher group classification accuracy for left-out participants when the SVM was trained on conceptual rather than categorical gaze models (t(399) = 3.88, p < 0.001). These results suggest that gaze differences are reliable within autistic individuals, and that group-level gaze differences are driven particularly by conceptual-level informational priorities.
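
The abstract does not spell out the modeling pipeline, so the following is a minimal sketch of how a “conceptual gaze model” might be fit, assuming mean-pooled BERT sentence embeddings (via Hugging Face transformers), scikit-learn PCA, and ordinary least-squares regression. The captions, the per-scene gaze summary, and all array shapes are hypothetical placeholders, not the authors’ data or code.

import numpy as np
import torch
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def sentence_embedding(caption):
    # Mean-pooled final-layer BERT states as a 768-d sentence embedding.
    tokens = tokenizer(caption, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = bert(**tokens).last_hidden_state   # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()

# Hypothetical stand-ins for the 60 human scene captions.
captions = [f"placeholder caption for photosphere {i}" for i in range(60)]
X = np.stack([sentence_embedding(c) for c in captions])  # (60, 768)

# Reduce the conceptual feature space with PCA, as described above
# (the number of components is an assumption).
X_reduced = PCA(n_components=20).fit_transform(X)        # (60, 20)

# One gaze summary per scene for one participant (hypothetical values).
gaze = np.random.default_rng(0).random(60)

# The participant's "conceptual gaze model": the linear mapping from
# conceptual features to gaze. Its coefficients serve as that
# participant's feature vector in the classification analyses below.
fit = LinearRegression().fit(X_reduced, gaze)
conceptual_gaze_model = fit.coef_                        # (20,)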
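
The two SVM analyses can be sketched in the same spirit. The abstract gives neither the cross-validation scheme nor the exact feature construction, so this assumes a linear-kernel SVM, five-fold cross-validation for the pairwise participant classifier, and leave-one-participant-out evaluation for the binary group (ASC vs. non-autistic) classifier; the features here are random placeholders.

import numpy as np
from itertools import combinations
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical scene-wise conceptual-model features:
# 40 participants x 60 scenes x 20 PCA dimensions.
features = rng.standard_normal((40, 60, 20))
group = np.array([0] * 20 + [1] * 20)  # 0 = non-autistic, 1 = ASC

# (1) Individual classification: for every participant pair, train an
# SVM to tell their scene-wise gaze models apart; chance is 50%.
pair_scores = []
for i, j in combinations(range(40), 2):
    X = np.vstack([features[i], features[j]])          # (120, 20)
    y = np.array([0] * 60 + [1] * 60)
    pair_scores.append(
        cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean())
print(f"mean pairwise accuracy: {np.mean(pair_scores):.2f}")

# (2) Group classification: one feature vector per participant (here,
# the scene-averaged model), tested on left-out participants.
X_group = features.mean(axis=1)                        # (40, 20)
acc = cross_val_score(SVC(kernel="linear"), X_group, group,
                      cv=LeaveOneOut()).mean()
print(f"ASC vs. non-ASC accuracy: {acc:.2f}")

With random placeholder features both accuracies hover around chance; the sketch shows only the shape of the analyses, not the reported 62% individual classification or the group-level effects.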
