Vision Sciences Society Annual Meeting Abstract  |   September 2021
Meaning guides clicks like fixations and drives memory for real-world scenes
Author Affiliations & Notes
  • Deborah Cronin
    University of California, Davis
  • John Henderson
    University of California, Davis
  • Footnotes
    Acknowledgements  This work was supported by the NEI of the NIH under award number R01EY027792.
Journal of Vision September 2021, Vol.21, 2823. doi:https://doi.org/10.1167/jov.21.9.2823
      Deborah Cronin, John Henderson; Meaning guides clicks like fixations and drives memory for real-world scenes. Journal of Vision 2021;21(9):2823. https://doi.org/10.1167/jov.21.9.2823.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Eye-movement data are useful for understanding visual and cognitive processes, but research-grade eye trackers can be cost-prohibitive. Further, the COVID-19 pandemic has made in-person eye-tracking studies impossible. In this study, we collected fixation-like data online by monitoring clicks on real-world scene photographs using BubbleView (Kim, Bylinskii, et al., 2017). We sought to predict click locations using meaning maps and to examine the role of attention to meaning in long-term scene memory.

[METHOD] Participants were presented with a blurred image for 12 s. Whenever a participant clicked on the image, a region of the unmodified scene became visible around the click location. In three experiments, we manipulated how long participants saw each scene before it was blurred (preview durations of 0, 50, and 200 ms). Participants' memory for the scenes was tested at the end of the experiment.

[RESULTS] We compared participants' click data from each experiment to previously collected fixation data for the same scenes and found the two to be remarkably similar. Participants' click maps correlated strongly with meaning maps (Henderson & Hayes, 2017), and participants were more likely to click high- than low-meaning scene regions, especially with longer scene previews. Finally, we found a significant relationship between participants' clicks on meaningful scene regions and their performance on the memory test: higher correlations between click maps and meaning maps and higher meaning values at click locations both predicted better memory.

[DISCUSSION] Meaning maps predicted click locations, suggesting that click data can be used much like fixation data for studying scene processing. This result also demonstrates the flexibility of meaning maps. We also found a relationship between attention to meaning and long-term memory for scenes, supporting the idea that meaning maps capture the distribution of semantic information in real-world scene images and suggesting that memory for scenes is fueled by semantic information.
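As a rough illustration of the click-map/meaning-map comparison described above (the abstract does not give the authors' actual analysis pipeline), the following minimal Python sketch smooths hypothetical click coordinates into a click map and computes its pixelwise correlation with a meaning map. The scene resolution, smoothing bandwidth, and all data values are placeholder assumptions.

# Minimal sketch, not the authors' pipeline: smooth discrete click
# locations into a continuous "click map", then correlate it with a
# meaning map of the same scene. Resolution, sigma, and data are assumed.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import pearsonr

HEIGHT, WIDTH = 768, 1024   # assumed scene resolution in pixels
SIGMA = 30                  # assumed Gaussian smoothing bandwidth in pixels

def click_map(clicks, height=HEIGHT, width=WIDTH, sigma=SIGMA):
    # Accumulate click counts per pixel, then blur into a smooth map.
    counts = np.zeros((height, width))
    for x, y in clicks:
        counts[int(y), int(x)] += 1
    return gaussian_filter(counts, sigma=sigma)

def map_correlation(map_a, map_b):
    # Pearson correlation between two attention maps of identical shape.
    r, _ = pearsonr(map_a.ravel(), map_b.ravel())
    return r

# Hypothetical data: random click locations and a random stand-in meaning map.
rng = np.random.default_rng(0)
clicks = rng.uniform([0, 0], [WIDTH, HEIGHT], size=(20, 2))
meaning_map = gaussian_filter(rng.random((HEIGHT, WIDTH)), sigma=SIGMA)

print(map_correlation(click_map(clicks), meaning_map))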
