September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   May 2019
Flipped on its Head: Deep Learning-Based Saliency Finds Asymmetry in the Opposite Direction Expected for Singleton Search of Flipped and Canonical Targets
Author Affiliations & Notes
  • Calden Wloka
    Electrical Engineering and Computer Science, Lassonde School of Engineering, York University
    Centre for Vision Research, York University
  • John K Tsotsos
    Electrical Engineering and Computer Science, Lassonde School of Engineering, York University
    Centre for Vision Research, York University
Journal of Vision May 2019, Vol. 19, 318. doi: https://doi.org/10.1167/19.10.318

Citation: Calden Wloka, John K Tsotsos; Flipped on its Head: Deep Learning-Based Saliency Finds Asymmetry in the Opposite Direction Expected for Singleton Search of Flipped and Canonical Targets. Journal of Vision 2019;19(10):318. doi: https://doi.org/10.1167/19.10.318.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

A search asymmetry occurs when an observer finds a search target A amongst distractors B faster than a target B amongst distractors A. A number of search asymmetries are well established in humans, but the phenomenon is less well studied in computational saliency modelling. Nevertheless, if these algorithms truly represent a component of early human visual attention, they should account for aspects of human attention beyond prediction performance on free-viewing fixation datasets (Bruce et al., 2015). Leveraging the recently developed Saliency Model Implementation Library for Experimental Research (SMILER), we devise a set of visual search arrays and test whether learned models of saliency exhibit an asymmetry of performance for targets with a novel flipped orientation over canonically oriented targets. Our findings show that the deep learning approach to computational saliency modelling which currently dominates the field consistently displays an asymmetric preference for canonically oriented stimuli. This asymmetry in performance runs counter to the behavioural patterns expected of human subjects, and suggests that the pattern-matching nature of deep learning is insufficient to fully account for human judgements of target salience.
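The experimental logic described above can be sketched as a minimal harness: build a singleton search array in which one item is a vertically flipped copy of the distractor shape, score each grid cell with a saliency map, and compare the target's saliency rank across the two target/distractor assignments. Everything below is an illustrative assumption, not SMILER's actual API or the stimuli used in the study; in particular, `toy_saliency` is a trivial contrast-based stand-in for the deep saliency models actually tested.

```python
import numpy as np

def make_item(flipped=False):
    """Draw a simple asymmetric item (an 'L' shape) on a 9x9 patch.
    A vertical flip yields the 'novel' orientation of the same shape."""
    patch = np.zeros((9, 9))
    patch[1:8, 2] = 1.0   # vertical stroke
    patch[7, 2:7] = 1.0   # horizontal foot
    return patch[::-1].copy() if flipped else patch

def make_search_array(grid=4, target_idx=0, target_flipped=True):
    """Tile a grid of items; one cell holds the singleton target,
    all other cells hold the opposite-orientation distractor."""
    cell = 12
    img = np.zeros((grid * cell, grid * cell))
    for k in range(grid * grid):
        r, c = divmod(k, grid)
        is_target = (k == target_idx)
        item = make_item(flipped=(target_flipped if is_target
                                  else not target_flipped))
        img[r*cell+1:r*cell+10, c*cell+1:c*cell+10] = item
    return img

def toy_saliency(img):
    """Stand-in saliency map: per-pixel deviation from the global mean.
    (A real experiment would substitute a learned saliency model here.)"""
    return np.abs(img - img.mean())

def target_rank(img, grid=4, target_idx=0):
    """Rank of the target cell by summed saliency (1 = most salient).
    An asymmetry shows up as different ranks for the two conditions."""
    cell = img.shape[0] // grid
    sal = toy_saliency(img)
    scores = []
    for k in range(grid * grid):
        r, c = divmod(k, grid)
        scores.append(sal[r*cell:(r+1)*cell, c*cell:(c+1)*cell].sum())
    order = np.argsort(scores)[::-1]
    return int(np.where(order == target_idx)[0][0]) + 1

# Compare the two assignments: flipped target among canonical
# distractors, and canonical target among flipped distractors.
rank_flipped_target = target_rank(make_search_array(target_flipped=True))
rank_canonical_target = target_rank(make_search_array(target_flipped=False))
```

With a symmetric stand-in metric like this one, both conditions score identically; the point of the harness is that a model exhibiting the human-expected asymmetry should rank the novel flipped target higher, whereas the paper reports that deep models do the opposite.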

Acknowledgement: The Canada Research Chairs Program, the Natural Sciences and Engineering Research Council of Canada, the Air Force Office of Scientific Research USA, the Office of Naval Research USA 