October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract
Visual and semantic similarity norms for a new object and scene photographic image set
Author Affiliations
  • Zhuohan Jiang
    Smith College
  • D. Merika W. Sanders
    University of Massachusetts, Amherst
  • Rosemary A. Cowell
    University of Massachusetts, Amherst
Journal of Vision October 2020, Vol.20, 1567. doi:https://doi.org/10.1167/jov.20.11.1567

Zhuohan Jiang, D. Merika W. Sanders, Rosemary A. Cowell; Visual and semantic similarity norms for a new object and scene photographic image set. Journal of Vision 2020;20(11):1567. https://doi.org/10.1167/jov.20.11.1567.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Photographic images of objects and scenes are widely used as stimuli in studies of memory and perception, in both behavioral and neuroimaging paradigms. Many repositories of color photographs of objects and scenes are publicly available, offering a range of valuable features such as standardized photographic composition (e.g., viewing and illumination angle), large numbers of exemplars in specific sub-categories (e.g., animate/inanimate; indoor/outdoor; everyday objects; faces), or standardized visual features (e.g., Greebles; stimuli normalized for low-level image properties). However, most of these sets do not provide quantitative data about the subjective relations between images within a set from the perspective of a human observer, i.e., perceptual and semantic similarity. This information is valuable because stimulus similarity influences many cognitive processes. The aim of the present study was to create a database of object and scene color photographs with both visual and semantic similarity ratings among images within well-defined sub-categories. We used Amazon’s Mechanical Turk to collect subjective similarity ratings, both visual and semantic, for 240 color photographs in four sub-categories (60 animate objects, 60 inanimate objects, 60 indoor scenes, and 60 outdoor scenes). Next, we applied multidimensional scaling (MDS) to create a visual and a semantic similarity space for each sub-category, and used automated clustering methods to provide similarity-based groupings of stimuli within the sub-categories. The stimulus set, similarity ratings, and methods for analyzing and grouping the stimuli by similarity will be made publicly available.
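
The abstract names two analysis steps, MDS on pairwise ratings and automated clustering, without procedural detail. The sketch below illustrates one common way to implement such a pipeline in Python with NumPy, scikit-learn, and SciPy; it is not the authors' released code, and the random similarity matrix, the 1-7 rating scale, and the five-cluster cut are stand-in assumptions for a single 60-image sub-category.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)

# Stand-in for mean pairwise similarity ratings (a hypothetical 1-7 scale)
# for the 60 images in one sub-category; in the actual study these would be
# averaged from the Mechanical Turk ratings.
n_items = 60
sim = rng.uniform(1.0, 7.0, size=(n_items, n_items))
sim = (sim + sim.T) / 2.0            # ratings treated as symmetric
np.fill_diagonal(sim, 7.0)           # an image is maximally similar to itself

# Convert similarity to dissimilarity so larger values mean "more different".
dissim = sim.max() - sim
np.fill_diagonal(dissim, 0.0)

# MDS on the precomputed dissimilarities yields a low-dimensional
# "similarity space" for the sub-category (one coordinate row per image).
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)   # shape: (60, 2)

# Average-linkage hierarchical clustering on the same dissimilarities gives
# similarity-based groupings; cutting the tree at 5 clusters is an arbitrary
# illustrative choice, not a value from the study.
tree = linkage(squareform(dissim, checks=False), method="average")
groups = fcluster(tree, t=5, criterion="maxclust")

print(coords.shape)                  # (60, 2)
print(np.bincount(groups)[1:])       # number of images per cluster
```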
