Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, Felix A. Wichmann; Unintended cue learning: Lessons for deep learning from experimental psychology. Journal of Vision 2020;20(11):652. doi: https://doi.org/10.1167/jov.20.11.652.
Recently, Deep Neural Networks (DNNs) have become a major tool and model in vision science. However, DNNs often fail unexpectedly. For example, they are highly vulnerable to noise and struggle to transfer their performance from the lab to the real world. In experimental psychology, unexpected failures are often the consequence of unintended cue learning. For example, rats trained to perform a colour discrimination experiment may appear to have learned the task but fail unexpectedly once the odour of the coloured paint is controlled for, revealing that they exploited an unintended cue, smell, to solve what was intended to be a vision experiment. Here we ask whether unexpected failures of DNNs, too, may be caused by unintended cue learning. We demonstrate that DNNs are indeed highly prone to picking up on subtle unintended cues: neural networks love to cheat. For instance, in a simple classification paradigm with two equally predictive cues, object silhouette and object location, human observers unanimously relied on object silhouette, whereas DNNs used object location, a strategy that fails once an object appears at a different location. Drawing parallels to other recent findings, we show that a wide variety of DNN failures can be understood as a consequence of unintended cue learning: their over-reliance on object background and context, adversarial examples, and a number of stunning generalisation errors. The perspective of unintended cue learning unifies some of the key challenges for DNNs as useful models of the human visual system. Drawing inspiration from experimental psychology (with its years of expertise in identifying unintended cues), we argue that we will need to exercise great care before attributing high-level abilities like "object recognition" or "scene understanding" to machines. Taken together, this opens up an opportunity for the vision sciences to contribute towards a better and more cautious understanding of deep learning.
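The silhouette-versus-location paradigm described in the abstract can be illustrated with a toy sketch. This is not the authors' actual stimuli or models, only a minimal stand-in: a linear classifier is trained on images in which shape and position are perfectly confounded, then tested on images where the positions are swapped. Because location alone separates the training set, the learned weights encode position rather than shape, and the classifier fails on the swap.

```python
import numpy as np

def render(shape, loc):
    """Place a 2x2 binary shape at grid position loc in a 4x4 image."""
    img = np.zeros((4, 4))
    r, c = loc
    img[r:r + 2, c:c + 2] = shape
    return img.ravel()

solid = np.ones((2, 2))   # stand-in "silhouette" for class 0
diag = np.eye(2)          # stand-in "silhouette" for class 1

# Training set: silhouette and location are perfectly confounded
# (solid shapes always top-left, diagonal shapes always bottom-right).
X_train = np.array([render(solid, (0, 0))] * 50 + [render(diag, (2, 2))] * 50)
y_train = np.array([0] * 50 + [1] * 50)

# Test set: same silhouettes, locations swapped.
X_test = np.array([render(solid, (2, 2)), render(diag, (0, 0))])
y_test = np.array([0, 1])

# Minimal logistic regression trained by gradient descent on raw pixels.
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X_train @ w + b)))
    w -= 0.5 * X_train.T @ (p - y_train) / len(y_train)
    b -= 0.5 * (p - y_train).mean()

train_acc = (((X_train @ w + b) > 0) == y_train).mean()
test_acc = (((X_test @ w + b) > 0) == y_test).mean()
print(f"train accuracy: {train_acc:.2f}, swapped-location accuracy: {test_acc:.2f}")
```

In training, both cues predict the label equally well, so the model is free to pick either; a pixel-level linear classifier latches onto position, and swapping positions at test time inverts its decisions. A human observer solving the same task by silhouette would be unaffected, which is exactly the divergence the abstract reports.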