Vision Sciences Society Annual Meeting Abstract | August 2023
Journal of Vision, Volume 23, Issue 9 (Open Access)
THINGS-drawings: A large-scale dataset containing human sketches of 1,854 object concepts
Author Affiliations & Notes
  • Judith E. Fan
    University of California, San Diego
  • Kushin Mukherjee
    University of Wisconsin-Madison
  • Holly Huey
    University of California, San Diego
  • Martin N. Hebart
    Justus Liebig University, Giessen, Germany
    Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
  • Wilma A. Bainbridge
    University of Chicago
  • Footnotes
    Acknowledgements: NSF CAREER Award #2047191
Journal of Vision August 2023, Vol.23, 5975.
      Judith E. Fan, Kushin Mukherjee, Holly Huey, Martin N. Hebart, Wilma A. Bainbridge; THINGS-drawings: A large-scale dataset containing human sketches of 1,854 object concepts. Journal of Vision 2023;23(9):5975.

People’s knowledge about objects has traditionally been probed using a combination of feature-listing and rating tasks. However, feature listing fails to capture nuances in what people know about how objects look — their visual knowledge — which cannot easily be described in words. Moreover, rating tasks are limited by the set of attributes that researchers even think to consider. By contrast, freehand sketching provides a way for people to externalize their visual knowledge about objects in an open-ended fashion. As such, sketch behavior provides a versatile substrate for asking a wide range of questions about visual object knowledge that go beyond the scope of a typical study. Here we introduce THINGS-drawings, a new crowdsourced dataset containing multiple freehand sketches of the 1,854 object concepts in the THINGS database (Hebart et al., 2019). THINGS-drawings contains fine-grained information about the stroke-by-stroke dynamics by which participants produced each sketch, as well as a rich set of other metadata, including ratings on various attributes, feature lists, and demographic characteristics of the participants contributing each sketch. As such, THINGS-drawings provides more comprehensive coverage of real-world visual concepts than previous sketch datasets (Eitz et al., 2012; Sangkloy et al., 2016; Jongejan et al., 2016), which contain less richly annotated sketches of a smaller number of concepts (i.e., ~100-300). This broader scope enables stronger tests of the capabilities of current artificial intelligence systems to understand abstract visual inputs, and thus provides a benchmark for driving the development of systems that display more human-like image understanding across visual modalities.
Moreover, we envision THINGS-drawings as a resource to the vision science community for investigating the richer aspects of many perceptual and cognitive phenomena in a unified manner, including visual imagery, memorability, semantic cognition, and visual communication.
