Open Access
Vision Sciences Society Annual Meeting Abstract | October 2020
Which “shoe” is best? Humans know what good training examples look like
Author Affiliations
  • Makaela Nartker
    Johns Hopkins University
  • Michael Lepori
    Johns Hopkins University
  • Chaz Firestone
    Johns Hopkins University
Journal of Vision October 2020, Vol. 20, 1318. doi: 10.1167/jov.20.11.1318

Citation: Makaela Nartker, Michael Lepori, Chaz Firestone; Which “shoe” is best? Humans know what good training examples look like. Journal of Vision 2020;20(11):1318. https://doi.org/10.1167/jov.20.11.1318.

© ARVO (1962–2015); The Authors (2016–present)

Abstract

In order to recognize something as belonging to a category (e.g., to represent something as a dog, or a shoe), one must first see examples of that category (e.g., specific dogs, or specific shoes). Which examples teach best? This is a problem we routinely investigate as vision researchers — e.g., when studying category learning or perceptual expertise. But it is also one we confront as people interacting with others — e.g., when we teach peers, pets, or children what things look like. This raises an intriguing question: Do ordinary people know what makes a good training example? Here, we exploit machine recognition to ask whether naive subjects have accurate intuitions about which examples are best for learning new visual categories. We told subjects about a “robot” and asked them to teach it to recognize the numbers 1, 2, and 3. Subjects saw handwritten digits from the MNIST database, and selected the digits they thought would be best for learning those categories. We then trained a classifier on subjects’ choices, and discovered that subject-chosen examples produced higher classification accuracy (on an independent test-set) than examples that subjects rejected. Follow-up experiments showed that subjects were sensitive to differences that were salient to our classifier. When the difference in classifier performance on two sets was small, subjects had trouble choosing the better set; but when that difference was large, subjects consistently chose the better set. Moreover, these effects generalized beyond digits, to images of real objects: For example, subjects also successfully chose which sneakers and boots would best teach a classifier to recognize those objects. These results reveal that people have surprisingly accurate intuitions for how others learn what the world looks like, and suggest a novel way to use “machine minds” to study human minds.
