Vision Sciences Society Annual Meeting Abstract  |   September 2018
Predictions Guide Gaze in Scene Search
Author Affiliations
  • Steven Luke
    Department of Psychology, Brigham Young University; Neuroscience Center, Brigham Young University
  • Benjamin Jafek
    Department of Computer Science, Brigham Young University
Journal of Vision September 2018, Vol.18, 240. doi:https://doi.org/10.1167/18.10.240

Citation: Steven Luke, Benjamin Jafek; Predictions Guide Gaze in Scene Search. Journal of Vision 2018;18(10):240. https://doi.org/10.1167/18.10.240.

Abstract

Traditionally, models have focused on the role of visual salience in directing attention during real-world scene processing. However, recent research has suggested that meaningfulness plays a primary role, and specifically that eye gaze is guided by predictions (Henderson, 2016; Henderson & Hayes, 2017). We quantified the predictability of search targets using a norming study in which participants were presented with scenes from the SCEGRAM image database (Öhlschläger & Võ, 2017). These scenes did not contain the search target, and participants indicated via mouse click where a given target would likely be located in the scene. Prediction maps were created from the data by applying a Gaussian blur (sigma = 1 degree of visual angle). A separate group of participants then searched the scenes for these target objects while their eye movements were tracked. Fixation maps were produced from the eye-tracking data, specifically the location of the first fixation after the initial saccade from image center. Saliency maps were also created for each image using graph-based visual saliency (Harel, Koch & Perona, 2006). Results indicate that the Prediction maps overlapped significantly with the Fixation maps when the target object was in or near the predicted location (r = 0.33). The Saliency and Fixation maps were more weakly related (r = 0.099). However, this Prediction map advantage disappeared when the target object was in an unusual location (e.g., the cereal bowl was on a chair instead of on the table; Prediction r = 0.095; Saliency r = 0.1). We also report the results of a deep neural network trained to predict eye fixation locations in an image from Prediction maps, Saliency maps, and both together. Together, these data indicate that prediction does guide gaze when peripheral visual information consistent with the prediction is available.
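
The map construction and comparison described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' analysis code: the abstract specifies only the Gaussian blur (sigma = 1 degree of visual angle) and that maps were compared by correlation, so the image size, pixels-per-degree conversion, function names, and example coordinates below are all assumptions made for demonstration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical display parameters (not stated in the abstract).
IMAGE_SHAPE = (768, 1024)   # (height, width) in pixels
PIXELS_PER_DEGREE = 30.0    # pixels per degree of visual angle

def make_map(points, shape=IMAGE_SHAPE, sigma_deg=1.0, ppd=PIXELS_PER_DEGREE):
    """Turn a list of (x, y) locations (clicks or fixations) into a density
    map: place a unit impulse at each location, then apply a Gaussian blur
    with sigma = 1 degree of visual angle (converted to pixels)."""
    density = np.zeros(shape, dtype=float)
    for x, y in points:
        density[int(round(y)), int(round(x))] += 1.0
    blurred = gaussian_filter(density, sigma=sigma_deg * ppd)
    total = blurred.sum()
    return blurred / total if total > 0 else blurred

def map_correlation(map_a, map_b):
    """Pixelwise Pearson correlation between two maps (analogous to the
    r values reported in the abstract)."""
    return np.corrcoef(map_a.ravel(), map_b.ravel())[0, 1]

# Hypothetical data: click locations from the norming study for one scene,
# and the first post-central fixation locations from the search task.
click_points = [(512, 400), (520, 390), (505, 410)]
first_fixation_points = [(515, 405), (530, 380)]

prediction_map = make_map(click_points)
fixation_map = make_map(first_fixation_points)
print("Prediction-Fixation map correlation:",
      map_correlation(prediction_map, fixation_map))
```

Because Pearson correlation is invariant to positive linear rescaling, normalizing each map to sum to 1 does not affect the result and is included here only for readability.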

Meeting abstract presented at VSS 2018
