December 2022, Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract
Visual Attention during Scene Viewing – Eye Tracking Discovery with K-Means and Gaussian Mixture Model
Author Affiliations
  • Xinrui Jiang
    Datacubed Health
  • Melissa Beck
    Louisiana State University
Journal of Vision December 2022, Vol. 22, 3631.
      Xinrui Jiang, Melissa Beck; Visual Attention during Scene Viewing – Eye Tracking Discovery with K-Means and Gaussian Mixture Model. Journal of Vision 2022;22(14):3631.

      © ARVO (1962-2015); The Authors (2016-present)

Nature scenes can generate cognitive (Berto, 2005) and affective (Bratman et al., 2015) benefits. According to Attention Restoration Theory (Kaplan, 1995, 2001), nature scenes attract attention effortlessly and allow the replenishment of directed attention. Researchers have tested the hypothesis that nature and urban scenes differ in visual processing and found supporting evidence in basic eye movement parameters (e.g., fixation, saccade; Berto et al., 2008; Valtchanov & Ellard, 2015). The current investigation applied k-means and Gaussian Mixture Models (GMM) to test this hypothesis and to capture the spatial relationships of visual attention beyond basic eye movement parameters. Participants freely viewed natural and urban scenes for five seconds. Free viewing could be followed by an attention check in which participants reported which of four scenes they had just viewed. Data from Experiment 1 (n = 158) were analyzed first, followed by Experiment 2 (n = 45) to confirm the results of Experiment 1. K-means clustering was carried out using the KMeans function in the scikit-learn Python library. GMM analysis was carried out using the Eye Movement analysis with Hidden Markov Models (EMHMM) toolbox (Chuk et al., 2014). Paired t-tests revealed consistent findings across the experiments. Specifically, in comparison with urban scenes, nature scenes elicited fewer fixations, longer fixation durations, smaller saccade amplitudes, and fewer clusters in both the k-means and GMM models (p < .05). Additional analyses of Calinski-Harabasz scores for the k-means models revealed lower scores for nature scenes. Consistent with previous literature (Berto et al., 2008; Valtchanov & Ellard, 2015), the visual processing of nature and urban scenes does differ in fixations and saccades. We further confirmed that k-means and GMM are capable of capturing this difference.
Additional research is needed to compare k-means and GMM in their abilities to assess the differences between nature and urban visual processing.
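The kind of clustering analysis described in the abstract can be sketched roughly as follows. This is a hypothetical minimal example on simulated fixation coordinates, not the study's actual analysis: the real GMM analysis used the MATLAB EMHMM toolbox, whereas this sketch substitutes scikit-learn's GaussianMixture, and the region centers, fixation counts, and cluster-count ranges are invented for illustration.

```python
# Hypothetical sketch: clustering fixation coordinates for one scene.
# Fixation data are simulated from three invented regions of interest;
# the study's GMM analysis used the MATLAB EMHMM toolbox, not scikit-learn.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulate (x, y) fixation positions in pixels around three regions.
fixations = np.vstack([
    rng.normal(loc=(300, 200), scale=25, size=(40, 2)),
    rng.normal(loc=(700, 250), scale=25, size=(40, 2)),
    rng.normal(loc=(500, 500), scale=25, size=(40, 2)),
])

# K-means: choose the cluster count that maximizes the Calinski-Harabasz score.
best_k, best_score = None, -np.inf
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(fixations)
    score = calinski_harabasz_score(fixations, labels)
    if score > best_score:
        best_k, best_score = k, score

# GMM: choose the component count that minimizes the Bayesian Information Criterion.
best_g = min(
    range(2, 7),
    key=lambda g: GaussianMixture(n_components=g, random_state=0)
                  .fit(fixations).bic(fixations),
)

print(best_k, best_g)
```

With per-scene cluster counts (and fit-quality scores such as Calinski-Harabasz) in hand, scene-level means for nature versus urban images could then be compared with paired t-tests, as the abstract reports.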

