Abstract
A search asymmetry occurs when it is faster for an observer to find a search target A amongst a set of B distractors than to find a target B amongst a set of A distractors. A number of search asymmetries are well established in humans, but the phenomenon is less well studied in research on computational saliency models. Nevertheless, if these algorithms truly represent a component of early human visual attention, they should be able to account for aspects of human attention beyond prediction performance on free-viewing fixation datasets (Bruce et al., 2015). Leveraging the recently developed Saliency Model Implementation Library for Experimental Research (SMILER), we devise a set of visual search arrays and test whether learned models of saliency exhibit an asymmetry of performance, favouring targets with a novel flipped orientation over canonically oriented targets. Our findings show that the deep learning approach to computational saliency modelling which currently dominates the field consistently displays an asymmetric preference for canonically oriented stimuli. This asymmetry runs counter to the behavioural patterns expected of human subjects, and suggests that the pattern-matching nature of deep learning is insufficient to fully account for human judgements of target salience.
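To make the evaluation concrete, below is a minimal sketch of how such an asymmetry could be scored, assuming SMILER (or any saliency model) has already produced a saliency map image for each search array. The manifest file name (arrays_manifest.json), its fields, and the rank-based scoring function are illustrative assumptions, not the study's actual pipeline: each entry is taken to record the saliency map path, the target's bounding box, and whether the target was canonical or flipped.

```python
import json
import numpy as np
from PIL import Image

def target_rank(saliency_map: np.ndarray, bbox: tuple) -> float:
    """Fraction of pixels in the map that are less salient than the
    peak saliency inside the target bounding box (1.0 means the
    target contains the single most salient region)."""
    x0, y0, x1, y1 = bbox
    peak = saliency_map[y0:y1, x0:x1].max()
    return (saliency_map < peak).mean()

def mean_rank(entries) -> float:
    """Average target rank over a list of (map_path, bbox) entries."""
    ranks = []
    for map_path, bbox in entries:
        sal = np.asarray(Image.open(map_path).convert("L"), dtype=float)
        ranks.append(target_rank(sal, tuple(bbox)))
    return float(np.mean(ranks))

if __name__ == "__main__":
    # Hypothetical manifest describing one saliency map per search array.
    with open("arrays_manifest.json") as f:
        manifest = json.load(f)

    canonical = [(e["map"], e["bbox"]) for e in manifest
                 if e["condition"] == "canonical"]
    flipped = [(e["map"], e["bbox"]) for e in manifest
               if e["condition"] == "flipped"]

    asym = mean_rank(canonical) - mean_rank(flipped)
    # A human-like asymmetry predicts asym < 0 (novel, flipped targets
    # are easier to find); the abstract reports the opposite pattern,
    # asym > 0, for deep-learning saliency models.
    print(f"canonical - flipped rank difference: {asym:+.3f}")
```

A rank-style score is used here rather than raw saliency values so that maps from different models, which need not share a common scale, remain comparable; any monotone measure of how strongly the target stands out would serve the same purpose.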
Acknowledgement: The Canada Research Chairs Program, the Natural Sciences and Engineering Research Council of Canada, the Air Force Office of Scientific Research USA, the Office of Naval Research USA