September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Taking a machine’s perspective: Humans can decipher adversarial images
Author Affiliations & Notes
  • Zhenglong Zhou
    Department of Psychological & Brain Sciences, Johns Hopkins University
  • Chaz Firestone
    Department of Psychological & Brain Sciences, Johns Hopkins University
Journal of Vision September 2019, Vol. 19, 59a. https://doi.org/10.1167/19.10.59a
Abstract

How similar is the human visual system to the sophisticated machine-learning systems that mirror its performance? Models of object categorization based on convolutional neural networks (CNNs) have achieved human-level benchmarks in labeling novel images. These advances not only support new technologies, but may also serve as candidate models for human vision itself. However, unlike human vision, CNNs can be “fooled” by adversarial examples — carefully crafted images that appear as nonsense patterns to humans but are recognized as familiar objects by machines, or that appear as one object to humans and a different object to machines. This extreme divergence between human and machine classification challenges the promise of these new advances, both as applied image-recognition systems and as models of human vision. Surprisingly, however, little work has empirically investigated human classification of adversarial stimuli; do humans and machines fundamentally diverge? Here, we show that human and machine classification of adversarial stimuli are robustly related. We introduce a “machine-theory-of-mind” task in which observers are shown adversarial images and must anticipate the machine’s label from a set of various alternatives. Across eight experiments on five prominent and diverse adversarial imagesets, human subjects reliably identified the machine’s preferred labels over relevant foils. We observed this result not only in forced-choice settings between two candidate labels, but also when subjects freely chose among dozens of possible labels. Moreover, this pattern persisted for images with strong antecedent identities (e.g., an orange adversarially perturbed into a “power drill”), and even for images described in the literature as “totally unrecognizable to human eyes” (e.g., unsegmented patterns of colorful pixels that are classified as an “armadillo”). We suggest that human intuition may be a more reliable guide to machine (mis)classification than has typically been imagined, and we explore the consequences of this result for minds and machines alike.
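For readers unfamiliar with how the perturbed images described above (e.g., an orange nudged toward “power drill”) are typically produced, the following is a minimal, illustrative sketch of a targeted fast-gradient-sign (FGSM) perturbation in PyTorch. It is not the authors’ method or stimuli; the model, ImageNet class index, step size, and the omission of input normalization are all simplifying assumptions.

```python
# Minimal sketch of a targeted adversarial perturbation (FGSM-style).
# Not the authors' code; model, class index, and epsilon are assumptions,
# and standard ImageNet preprocessing is omitted for brevity.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def targeted_fgsm(image, target_class, epsilon=0.02):
    """Nudge `image` toward `target_class` with one signed-gradient step."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([target_class]))
    loss.backward()
    # Step down the loss gradient so the target label becomes more likely,
    # while keeping the change small enough to be barely visible to humans.
    adversarial = image - epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Hypothetical usage: perturb a (1, 3, 224, 224) photo toward a chosen
# ImageNet class (740 is assumed here to be "power drill").
photo = torch.rand(1, 3, 224, 224)   # placeholder for a real image tensor
perturbed = targeted_fgsm(photo, target_class=740)
print(model(perturbed).argmax(dim=1))
```

In practice, such small perturbations leave the image looking unchanged to a human observer while shifting the CNN’s top label, which is the divergence the experiments above probe with the machine-theory-of-mind task.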

Acknowledgement: JHU Science of Learning Institute 