Patrick Laflamme, James Enns; Superstitious perception by humans and convolutional neural networks. Journal of Vision 2017;17(10):807. doi: https://doi.org/10.1167/17.10.807.
Recent comparisons between convolutional neural networks (CNNs) trained to identify objects in images and the human visual system suggest that the hierarchical organization of the two systems is quite similar (Cichy, Khosla, Pantazis, Torralba, & Oliva, 2016), and that activity in higher-order layers of a CNN predicts activity in infero-temporal cortex (Yamins et al., 2014). However, systematic behavioural comparisons of CNNs and human vision are just beginning. Our approach is to use the well-studied domain of visual illusions, where humans make predictable "errors," to test whether CNNs are governed by the same functional principles. We began with the phenomenon of superstitious perception (Gosselin & Schyns, 2003). Participants (n = 8) tried to identify targets in visual noise (≈22,000 total trials), sometimes falsely selecting images as targets. Averaging the images falsely identified in this way yields a composite image resembling the target. We then compared two methods for predicting which images were falsely identified. The first used the image-wise correlation between the target and the noisy image (Gosselin & Schyns, 2003); it discriminated participants' "target present" responses from "target absent" responses significantly, d′ = 0.10, 95% CI [0.074, 0.13]. The second used the likelihood of target reports generated by CNNs trained to identify noisy images of the target to increasing levels of accuracy. In contrast to a naïve, untrained CNN, which showed no measurable sensitivity, CNNs trained to identify real targets hidden in visual noise discriminated participants' responses with accuracy similar to image-wise correlation, d′ = 0.10, 95% CI [0.074, 0.13]. While this implies that CNNs and humans use similar criteria for image identification, finer-grained comparisons of the two methods also hint at important differences, which we will pursue in further experiments.
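The image-wise correlation technique and the d′ agreement measure described above can be sketched as follows. This is a minimal illustration, not the authors' analysis pipeline: the template, stimuli, and observer responses are randomly generated stand-ins, the median-split classification rule and the log-linear correction for extreme rates are assumptions, and the real study used ≈22,000 trials per the abstract.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical data: a target template and flattened noise-field stimuli.
target = rng.normal(size=(64, 64)).ravel()
stimuli = rng.normal(size=(200, 64 * 64))
# Simulated observer responses: 1 = "target present", 0 = "target absent".
responses = rng.integers(0, 2, size=200)

def imagewise_corr(template, images):
    """Pearson correlation between one template and each image (row)."""
    t = template - template.mean()
    x = images - images.mean(axis=1, keepdims=True)
    return (x @ t) / (np.linalg.norm(x, axis=1) * np.linalg.norm(t))

def dprime(pred, resp, eps=0.5):
    """d' for how well binary predictions discriminate the responses.

    Uses a log-linear correction (eps) so hit/false-alarm rates of
    exactly 0 or 1 do not produce infinite z-scores.
    """
    hits = np.sum(pred & (resp == 1))
    fas = np.sum(pred & (resp == 0))
    n_sig = np.sum(resp == 1)
    n_noise = np.sum(resp == 0)
    hr = (hits + eps) / (n_sig + 2 * eps)
    far = (fas + eps) / (n_noise + 2 * eps)
    return norm.ppf(hr) - norm.ppf(far)

# Predict "target present" for stimuli whose correlation with the
# template exceeds the median (an assumed decision rule), then score
# agreement with the observer's responses as d'.
scores = imagewise_corr(target, stimuli)
predicted = scores > np.median(scores)
print(f"d' = {dprime(predicted, responses):.3f}")
```

With independent random stimuli and responses, as here, d′ should hover near zero; the abstract's finding is that on real trials this statistic is small but reliably positive.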
Meeting abstract presented at VSS 2017