Vision Sciences Society Annual Meeting Abstract  |  December 2022
Volume 22, Issue 14  |  Open Access
THINGS+: new norms and metadata for the THINGS database of 1,854 object concepts and 26,107 natural object images
Author Affiliations
  • Laura Stoinski
    Max Planck Institute for Human Cognitive & Brain Sciences
  • Jonas Perkuhn
  • Martin Hebart
Journal of Vision December 2022, Vol.22, 3247. doi:https://doi.org/10.1167/jov.22.14.3247
Abstract

The need for well-curated object concepts and images to study visual object processing has grown significantly in recent years. To address this, we previously developed THINGS (Hebart et al., 2019), a large-scale database of 1,854 systematically sampled object concepts with 26,107 high-quality naturalistic images of these concepts. With THINGS+, we aim to extend THINGS by adding concept-specific and image-specific norms and metadata. Concept-specific norms were collected for all 1,854 object concepts for the object properties real-world size, manmadeness, preciousness, liveliness, heaviness, naturalness, ability to move, graspability, holdability, ability to be moved, pleasantness, and arousal. Further, we extended high-level categorization to 53 superordinate categories and collected typicality ratings for members of all 53 categories. Image-specific metadata includes measures of nameability and recognizability for the objects in all 26,107 images. To this end, we asked participants to provide labels for the prominent objects depicted in each of the 26,107 images and measured the alignment of these labels with the original object concept. Finally, to allow example images to be presented in publications without copyright restrictions, we identified one new public-domain image per object concept. The results showed high consistency of property ratings (r = 0.84-0.99, M = 0.96, SD = 0.41) and typicality ratings (r = 0.88-0.98; M = 0.96, SD = 0.19). Correlations of our data with external norms were moderate to high for object properties (r = 0.3-0.95; M = 0.84, SD = 0.41) and typicality scores (r = 0.72-0.88; M = 0.79, SD = 0.18). To summarize, THINGS+ provides a broad, externally validated extension to existing object norms and an important extension to THINGS as a general resource of object concepts, images, and category memberships. Our norms, metadata, and images allow for the careful selection of stimuli and control variables for a wide range of research on object processing and semantic memory.
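For readers who want a concrete sense of the reported statistics, the minimal sketch below illustrates one way rating reliability and external validation of this kind can be computed: split-half consistency of a property's ratings (with a Spearman-Brown correction) and a Pearson correlation of mean ratings with an external norm set. The array shapes, variable names, and random placeholder data are illustrative assumptions only, not the published THINGS+ files or analysis code.

```python
# Illustrative sketch only: split-half consistency of one property's ratings
# and correlation with an external norm set. Data are random placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical ratings matrix: participants x concepts (e.g., heaviness ratings
# for the 1,854 THINGS concepts)
ratings = rng.normal(size=(30, 1854))

# Split-half consistency: correlate mean ratings from two random halves of participants
order = rng.permutation(ratings.shape[0])
first, second = order[:15], order[15:]
r_split, _ = pearsonr(ratings[first].mean(axis=0), ratings[second].mean(axis=0))

# Spearman-Brown correction to estimate full-sample reliability
r_consistency = 2 * r_split / (1 + r_split)

# External validation: correlate mean ratings with a (hypothetical) external norm set
external_norms = rng.normal(size=1854)
r_external, _ = pearsonr(ratings.mean(axis=0), external_norms)

print(f"split-half consistency (Spearman-Brown corrected): {r_consistency:.2f}")
print(f"correlation with external norms: {r_external:.2f}")
```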
