October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract
Unintended cue learning: Lessons for deep learning from experimental psychology
Author Affiliations
  • Robert Geirhos
    University of Tuebingen
    International Max Planck Research School for Intelligent Systems
  • Jörn-Henrik Jacobsen
    University of Toronto, Vector Institute
  • Claudio Michaelis
    University of Tuebingen
    International Max Planck Research School for Intelligent Systems
  • Richard Zemel
    University of Toronto, Vector Institute
  • Wieland Brendel
    University of Tuebingen
  • Matthias Bethge
    University of Tuebingen
  • Felix A. Wichmann
    University of Tuebingen
Journal of Vision October 2020, Vol.20, 652. doi:https://doi.org/10.1167/jov.20.11.652
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Recently, Deep Neural Networks (DNNs) have become a major tool and model in vision science. However, DNNs often fail unexpectedly. For example, they are highly vulnerable to noise and struggle to transfer their performance from the lab to the real world. In experimental psychology, unexpected failures are often the consequence of unintended cue learning. For example, rats trained to perform a colour discrimination experiment may appear to have learned the task but fail unexpectedly once the odour of the colour paint is controlled for, revealing that they exploited an unintended cue (smell) to solve what was intended to be a vision experiment. Here we ask whether unexpected failures of DNNs, too, may be caused by unintended cue learning. We demonstrate that DNNs are indeed highly prone to picking up on subtle unintended cues: neural networks love to cheat. For instance, in a simple classification paradigm with two equally predictive cues, object silhouette and object location, human observers unanimously relied on object silhouette, whereas DNNs used object location, a strategy which fails once an object appears at a different location. Drawing parallels to other recent findings, we show that a wide variety of DNN failures can be understood as consequences of unintended cue learning: over-reliance on object background and context, adversarial examples, and a number of stunning generalisation errors. The perspective of unintended cue learning unifies some of the key challenges for DNNs as useful models of the human visual system. Drawing inspiration from experimental psychology (with its decades of expertise in identifying unintended cues), we argue that we will need to exercise great care before attributing high-level abilities like "object recognition" or "scene understanding" to machines. Taken together, this opens up an opportunity for the vision sciences to contribute towards a better and more cautionary understanding of deep learning.
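The two-cue paradigm described above can be illustrated with a minimal sketch. This is not the authors' actual stimulus set or classifier; it is a hypothetical toy construction in which object shape and object location are perfectly correlated with the class label during training and decorrelated at test time, showing how a location-based shortcut yields perfect training accuracy yet fails to generalise, while the intended shape cue still transfers.

```python
# Toy illustration of unintended cue learning (hypothetical data, not the
# authors' stimuli): during training, shape and location are equally
# predictive cues; at test time, the location cue is decorrelated.

def make_image(shape, col, size=8):
    """Place a small shape ('bar' = 3x1, 'block' = 2x2) at column `col`."""
    img = [[0] * size for _ in range(size)]
    if shape == "bar":
        for r in range(3):
            img[2 + r][col] = 1
    else:  # "block"
        for r in range(2):
            for c in range(2):
                img[3 + r][col + c] = 1
    return img

def centroid_col(img):
    """Mean column index of the 'on' pixels."""
    cols = [c for row in img for c, v in enumerate(row) if v]
    return sum(cols) / len(cols)

def location_classifier(img, size=8):
    """Shortcut strategy: decide purely from horizontal position."""
    return 0 if centroid_col(img) < size / 2 else 1

def shape_classifier(img):
    """Intended strategy: decide from shape (pixel count: bar=3, block=4)."""
    return 0 if sum(v for row in img for v in row) == 3 else 1

# Training set: cues correlated -- class 0 is always a bar on the left,
# class 1 always a block on the right.
train = [(make_image("bar", 1), 0), (make_image("block", 5), 1)]
# Test set: locations swapped, shapes unchanged.
test = [(make_image("bar", 5), 0), (make_image("block", 1), 1)]

def accuracy(clf, data):
    return sum(clf(img) == y for img, y in data) / len(data)

print(accuracy(location_classifier, train))  # 1.0 -- the shortcut looks perfect
print(accuracy(location_classifier, test))   # 0.0 -- fails once objects move
print(accuracy(shape_classifier, test))      # 1.0 -- the shape cue transfers
```

On the training data alone, the two strategies are indistinguishable; only a controlled test set that breaks the cue correlation reveals which cue was actually learned, mirroring the odour-controlled rat experiment.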
