Vision Sciences Society Annual Meeting Abstract  |   May 2008
Predicting search efficiency with a low-level visual difference model
Author Affiliations
  • P. George Lovell
    Department of Experimental Psychology, University of Bristol
  • Iain D. Gilchrist
    Department of Experimental Psychology, University of Bristol
  • David J. Tolhurst
    Department of Physiology, Development and Neuroscience, University of Cambridge
  • Michelle To
    Department of Physiology, Development and Neuroscience, University of Cambridge
  • Tomasz Troscianko
    Department of Physiology, Development and Neuroscience, University of Cambridge
Journal of Vision May 2008, Vol.8, 1082. doi:10.1167/8.6.1082
Abstract

Duncan and Humphreys (Psychological Review, 96(3), 1989) predicted that visual search efficiency varies as a function of both target-distractor and distractor-distractor similarity. However, applying these concepts to search for targets in images containing complex, naturalistic objects is difficult because the degree of similarity (or difference) between elements of the image is hard to quantify. Given that metrics now exist which predict image differences reasonably well (Visual Difference Predictors, or VDPs; Parraga et al., Vision Research, 45, 2005), we wish to use the output of these metrics to predict search performance in scenes containing natural objects. We therefore generated search images (consisting of a target and distractors at discrete locations on a uniform background, cf. traditional search experiments) in which increases in target-distractor similarity or in distractor-distractor heterogeneity should both reduce search efficiency. The current study examines observers' (n=5) visual search efficiency for natural objects while manipulating these factors together with display size. Observers were shown a new target for each block of trials. Observer reaction times were modelled with neural networks whose inputs were the VDP's predictions of visual similarity; this yielded reliable predictions of search efficiency. A post-hoc examination of the neural-network activation patterns enabled reconstruction of Duncan and Humphreys' original prediction of search efficiency as a function of target-distractor and distractor-distractor similarity. We have therefore demonstrated that VDPs can be used to predict search performance in natural images, showing the utility of the Duncan and Humphreys model for such scenes. Further work is needed to extend this method to predict search performance in scenes in which the background is continuous.
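The modelling step described above (reaction times regressed onto VDP similarity outputs via a neural network) can be illustrated with a minimal sketch. The data here are entirely synthetic stand-ins, not the authors' stimuli or VDP: per-trial target-distractor similarity, distractor-distractor similarity, and display size are drawn at random, and a reaction time is generated following the qualitative Duncan and Humphreys pattern (RT rises with target-distractor similarity and with distractor heterogeneity). A one-hidden-layer network trained by plain gradient descent then learns to predict RT from those inputs; the specific architecture and training details are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for VDP outputs, one row per trial:
#   td - mean target-distractor similarity (higher -> harder search)
#   dd - mean distractor-distractor similarity (higher -> easier search)
#   ds - display size, normalised to [0, 1]
n_trials = 400
td = rng.uniform(0.0, 1.0, n_trials)
dd = rng.uniform(0.0, 1.0, n_trials)
ds = rng.integers(4, 17, n_trials) / 16.0

# Synthetic reaction times with the Duncan & Humphreys signature:
# RT grows with td, shrinks with dd, and td interacts with display size.
rt = 0.4 + 0.8 * td - 0.5 * dd + 0.3 * ds * td \
     + 0.05 * rng.normal(size=n_trials)

X = np.column_stack([td, dd, ds])
y = rt

# One-hidden-layer network (3 inputs -> 8 tanh units -> 1 output).
h = 8
W1 = rng.normal(scale=0.5, size=(3, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.5, size=h);      b2 = 0.0

def forward(X, W1, b1, W2, b2):
    a = np.tanh(X @ W1 + b1)        # hidden activations
    return a, a @ W2 + b2           # activations, predicted RT

_, pred = forward(X, W1, b1, W2, b2)
loss_before = np.mean((pred - y) ** 2)

lr = 0.05
for _ in range(2000):
    a, pred = forward(X, W1, b1, W2, b2)
    err = pred - y                  # gradient of 0.5*MSE w.r.t. pred
    gW2 = a.T @ err / n_trials
    gb2 = err.mean()
    da = np.outer(err, W2) * (1.0 - a ** 2)   # backprop through tanh
    gW1 = X.T @ da / n_trials
    gb1 = da.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X, W1, b1, W2, b2)
loss_after = np.mean((pred - y) ** 2)
print(f"MSE before training: {loss_before:.3f}, after: {loss_after:.3f}")
```

After training, the network's fit to RT should improve markedly, and inspecting its response surface over (td, dd) would be the analogue of the post-hoc activation analysis the abstract describes, recovering the predicted dependence of efficiency on the two similarity factors.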

Lovell, P. G., Gilchrist, I. D., Tolhurst, D. J., To, M., & Troscianko, T. (2008). Predicting search efficiency with a low-level visual difference model [Abstract]. Journal of Vision, 8(6):1082, 1082a, http://journalofvision.org/8/6/1082/, doi:10.1167/8.6.1082.