September 2021
Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2021
Role of memory in a Bayesian ideal observer model of visual search in natural images
Author Affiliations & Notes
  • Shima Rashidi
    The University of Melbourne, School of Computing and Information Systems
  • Krista A. Ehinger
    The University of Melbourne, School of Computing and Information Systems
  • Lars Kulik
    The University of Melbourne, School of Computing and Information Systems
  • Andrew Turpin
    The University of Melbourne, School of Computing and Information Systems
  • Footnotes
    Acknowledgements  S. R. is funded by the University of Melbourne via a Melbourne Research Scholarship (MRS).
Journal of Vision September 2021, Vol. 21, 2450. doi: https://doi.org/10.1167/jov.21.9.2450
      Shima Rashidi, Krista A. Ehinger, Lars Kulik, Andrew Turpin; Role of memory in a Bayesian ideal observer model of visual search in natural images. Journal of Vision 2021;21(9):2450. doi: https://doi.org/10.1167/jov.21.9.2450.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Given a model of target detectability over eccentricity across the visual field, the choice of next fixation is well predicted by a Bayesian ideal observer when the background is 1/f noise (Najemnik & Geisler, 2005). We have recently extended this work to real-world object targets embedded in natural image backgrounds (Rashidi, Ehinger, Turpin, & Kulik, 2020). The present work examines whether adding short-term memory to the ideal observer model improves predictions of fixation patterns. We recorded eye movements from 5 observers searching for a person in 18 natural backgrounds. The target subtended 0.96 degrees visual angle and could appear at any of 84 possible locations (0-6.75 degrees eccentricity). We model the target detectability against each background using features extracted from deep convolutional neural networks, pooled over spatial regions increasing in size with eccentricity. We feed the calculated detectability maps to the ideal observer model and predict the fixation locations of the human observers. The model assumes that observers integrate information about target location over all previous fixations, so we represent memory span by limiting the integration to the most recent m fixations. We test m = 2, 4, 6, 8, 10 and perfect memory. We find that the model with a memory span of 4 fixations best predicts human visual search performance (RMSE = 3.476). This improves on the original model, which assumes no memory limitations (RMSE = 4.057, t(17) = 2.921, p = 0.0509). When considering only the backgrounds where the mean number of search fixations is greater than 4, RMSE reduces from 4.879 to 4.07 (t(11) = 2.214, p = 0.0488). This suggests that the visual system does not take the entire search history into account when selecting the next fixation location, as assumed in the Najemnik & Geisler model. Instead, the choice of next fixation is primarily based on the previous few fixations.
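The core manipulation described above can be illustrated with a short sketch. This is not the authors' implementation: it assumes the evidence from each past fixation is available as a per-location log-likelihood map, uses a flat prior over candidate locations, and selects the next fixation with a simple maximum-a-posteriori rule rather than the full expected-accuracy criterion of Najemnik & Geisler (2005). The memory span m simply truncates the integration to the most recent m maps (m = None corresponds to perfect memory).

```python
import numpy as np

def next_fixation(loglik_maps, m=4):
    """Pick the next fixation under a limited-memory ideal observer (sketch).

    loglik_maps: list of 2-D arrays, one per past fixation, each giving the
    log-likelihood of the target being at each candidate location given the
    evidence gathered on that fixation. Only the most recent m maps are
    integrated; m=None integrates the full history (perfect memory).
    """
    recent = loglik_maps if m is None else loglik_maps[-m:]
    log_posterior = np.sum(recent, axis=0)   # flat prior: sum log-likelihoods
    log_posterior -= log_posterior.max()     # subtract max for numerical stability
    posterior = np.exp(log_posterior)
    posterior /= posterior.sum()             # normalize to a probability map
    # Simplified MAP rule: fixate the location with the highest posterior
    return np.unravel_index(np.argmax(posterior), posterior.shape)
```

With m = 4, evidence older than four fixations ago no longer influences the posterior, so a location that was strongly ruled out early in the search can be revisited, which is the behavioral signature the memory-limited model is testing for.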
