October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract
Bayesian model of human visual search in natural images
Author Affiliations
  • Gaston Bujia
    Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación, Universidad de Buenos Aires - Consejo Nacional de Investigaciones en Ciencia y Técnica, Argentina.
    Instituto del Cálculo, Universidad de Buenos Aires - Consejo Nacional de Investigaciones en Ciencia y Técnica, Argentina.
  • Melanie Sclar
    Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación, Universidad de Buenos Aires - Consejo Nacional de Investigaciones en Ciencia y Técnica, Argentina.
  • Sebastian Vita
    Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación, Universidad de Buenos Aires - Consejo Nacional de Investigaciones en Ciencia y Técnica, Argentina.
  • Guillermo Solovey
    Instituto del Cálculo, Universidad de Buenos Aires - Consejo Nacional de Investigaciones en Ciencia y Técnica, Argentina.
  • Juan Kamienkowski
    Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación, Universidad de Buenos Aires - Consejo Nacional de Investigaciones en Ciencia y Técnica, Argentina.
    Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Argentina.
Journal of Vision October 2020, Vol.20, 1596. doi: https://doi.org/10.1167/jov.20.11.1596

Gaston Bujia, Melanie Sclar, Sebastian Vita, Guillermo Solovey, Juan Kamienkowski; Bayesian model of human visual search in natural images. Journal of Vision 2020;20(11):1596. doi: https://doi.org/10.1167/jov.20.11.1596.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

The ability to efficiently find objects in the visual field is essential for almost any everyday visual activity. In recent decades, models that accurately predict the most likely fixation locations (saliency maps) have advanced substantially, although progress seems to have reached a plateau. Today, one of the biggest challenges in the field is to go beyond saliency maps and predict the sequence of fixations produced during different visual tasks. In particular, for visual search, Bayesian observers have been proposed that model search behavior as an active sampling process: during each fixation, humans incorporate new information and update the probability of finding the target at every location. Here, we combine these approaches for visual search in natural images and propose a model that predicts the whole scanpath. Our Bayesian Searcher (BS) uses a saliency map as a prior and computes the most likely next fixation location given all previous fixations, taking into account visual properties of the target and the scene. We collected eye-movement visual search data (N=57) on 134 natural indoor scenes and compared different variants of the model and its parameters. First, considering only the third fixation of each scanpath, we compared different state-of-the-art saliency maps on our dataset, reaching AUC performances similar to those reported on other datasets. After the third fixation, however, all performances dropped to almost chance level, suggesting that saliency maps alone are not enough when top-down task information is critical. Second, and more strikingly, the behavior of the BS models was indistinguishable from that of humans across all fixations, both in the percentage of targets found as a function of fixation rank and in scanpath similarity, reproducing the entire sequence of eye movements.
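The search loop the abstract describes (a saliency prior over target locations, a noisy eccentricity-limited observation at each fixation, a Bayesian update, and fixation of the currently most probable location) can be sketched as a minimal ideal-observer loop in the spirit of Najemnik and Geisler's Bayesian searcher. This is an illustrative sketch, not the authors' implementation: the grid size, exponential visibility fall-off, and d' values below are assumptions chosen for clarity.

```python
import numpy as np

def run_search(prior, target, n_fix=10, d0=3.0, halfwidth=2.0, rng=None):
    """Sketch of a Bayesian searcher: start from a saliency prior, accumulate
    noisy, eccentricity-limited evidence at each fixation, and always fixate
    the currently most probable target location (a MAP rule)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    H, W = prior.shape
    rows, cols = np.mgrid[0:H, 0:W]
    log_post = np.log(prior + 1e-12)          # prior belief over target location
    fix = np.unravel_index(np.argmax(log_post), (H, W))
    scanpath = [fix]
    for _ in range(n_fix - 1):
        # detectability d' decays with eccentricity from the current fixation
        ecc = np.hypot(rows - fix[0], cols - fix[1])
        dprime = d0 * np.exp(-ecc / halfwidth)
        # one noisy observation per cell; signal only at the true target location
        obs = rng.standard_normal((H, W))
        obs[target] += dprime[target]
        # Gaussian log-likelihood-ratio update for "the target is in this cell"
        log_post += dprime * obs - 0.5 * dprime**2
        log_post -= log_post.max()            # keep values numerically stable
        fix = np.unravel_index(np.argmax(log_post), (H, W))
        scanpath.append(fix)
        if fix == target:                     # stop once the target is fixated
            break
    return scanpath
```

In the full model the prior would come from a learned saliency map and the evidence term from target-scene similarity; here a uniform prior and a synthetic signal stand in for both, which is enough to reproduce the qualitative behavior of fixations homing in on the target.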
