October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract  |   October 2020
Semantic knowledge guides attention in real-world scenes
Author Affiliations & Notes
  • Taylor R. Hayes
    University of California, Davis
  • John M. Henderson
    University of California, Davis
  • Footnotes
    Acknowledgements  Supported by the National Institutes of Health (NEI) under award number R01EY027792.
Journal of Vision October 2020, Vol.20, 583. doi:https://doi.org/10.1167/jov.20.11.583
Abstract

Stored semantic knowledge gained through experience is theorized to play a critical role in determining the attentional priority of objects in real-world scenes. However, the link between semantic knowledge and attention remains largely unknown due to the difficulty of quantifying semantics. The present study tested the link between stored semantic knowledge and scene attention by combining a vector-space model of word semantics, derived from how words are used in written text and from crowd-sourced knowledge about the world, with eye movements in real-world scenes. Within this approach, the vector-space model of word semantics (i.e., ConceptNet Numberbatch; Speer, Chin, & Havasi, 2016) served as a proxy for stored semantic knowledge gained from experience, and eye movements served as an index of attentional priority in scenes. Participants (N=100) viewed 100 real-world scenes for 12 seconds each while performing memorization and aesthetic judgment tasks. A representation of the spatial distribution of object semantics in each scene was built by segmenting and labeling all objects, computing the mean cosine similarity between each object and the other objects in that scene using ConceptNet, and then adding each object's mean similarity value at the locations that object occupied within the scene. We then applied a logistic generalized linear mixed-effects model, with subject and scene as random effects, to examine how a scene region's semantic value was related to its likelihood of being fixated. The results showed that the higher the semantic value of a scene region, the more likely that region was to be fixated. These findings help bridge the gap between the theorized role of stored semantic knowledge and attentional control during scene viewing, and they highlight the usefulness of models of word semantics for testing theories of scene attention.
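The semantic-map construction described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the object labels, rectangular regions, and the `embeddings` lookup are hypothetical stand-ins (the study used full object segmentations and ConceptNet Numberbatch vectors), and summing values where objects overlap is a simplifying assumption.

```python
import numpy as np

def cosine(a, b):
    # cosine similarity between two word vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_map(objects, embeddings, height, width):
    """Build a per-pixel map of object semantic value for one scene.

    objects: list of (label, (row0, row1, col0, col1)) regions
             (rectangles stand in for the study's object segmentations)
    embeddings: dict mapping object labels to word vectors
                (a stand-in for ConceptNet Numberbatch lookups)
    """
    vecs = [embeddings[label] for label, _ in objects]
    smap = np.zeros((height, width))
    for i, (label, (r0, r1, c0, c1)) in enumerate(objects):
        # mean cosine similarity of this object to every other object
        sims = [cosine(vecs[i], vecs[j]) for j in range(len(vecs)) if j != i]
        mean_sim = float(np.mean(sims))
        # add the object's mean similarity over the region it occupies
        smap[r0:r1, c0:c1] += mean_sim
    return smap
```

In this sketch, an object that is semantically close to its scene companions (e.g., a cup among plates) produces a higher value over its region than a semantically isolated object, which is the spatial "semantic value" the abstract relates to fixation likelihood.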
