Vision Sciences Society Annual Meeting Abstract  |  September 2019
Journal of Vision, Volume 19, Issue 10  |  Open Access
Revealing the behaviorally-relevant dimensions underlying mental representations of objects
Author Affiliations & Notes
  • Martin N Hebart
    Laboratory of Brain and Cognition, National Institute of Mental Health
  • Charles Y Zheng
    Section on Functional Imaging Methods, National Institute of Mental Health
  • Francisco Pereira
    Section on Functional Imaging Methods, National Institute of Mental Health
  • Chris I Baker
    Laboratory of Brain and Cognition, National Institute of Mental Health
Journal of Vision, September 2019, Vol. 19, 170b. https://doi.org/10.1167/19.10.170b
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Humans can identify and categorize visually presented objects rapidly and with little effort, yet for our everyday interactions with the world some object dimensions (e.g., shape or function) matter more than others. While these behaviorally relevant dimensions are believed to form the basis of our mental representations of objects, their characterization typically depends on small-scale experiments with synthetic stimuli, often with pre-defined dimensions, thus leaving the large-scale structure of the behavioral representations on which we ground object recognition and categorization unresolved. To fill this gap, we used large-scale online crowdsourcing of behavioral choices in a triplet odd-one-out similarity task. Based on natural images of 1,854 distinct objects and ~1.5 million behavioral responses, we developed a data-driven computational model (sparse positive embedding) that identifies object dimensions by learning to predict behavior in this task. Although this dataset covers only 0.15% of all possible trials, cross-validated performance was excellent: the model correctly predicted 63% of individual human responses, approaching the noise ceiling (67%). Further, the object similarity structure derived from these dimensions corresponded closely to a reference similarity matrix of 48 objects (r = 0.90). The model identified 49 interpretable dimensions, representing degrees of taxonomic membership (e.g., food), function (e.g., transportation), and perceptual properties (e.g., shape, texture, color). The dimensions were predictive of external behavior, including human typicality judgments, category membership, and object feature norms, suggesting that they reflect mental representations of objects that generalize beyond the similarity task. Moreover, independent participants (n = 20) were able to assign values to the dimensions of 20 separate objects, reproducing their similarity structure with high accuracy (r = 0.84). Together, these results reveal an interpretable representational space that accurately describes human similarity judgments for thousands of objects, thus offering a pathway towards a generative model of visual similarity judgments based on the comparison of behaviorally relevant object dimensions.
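The core of this approach can be illustrated with a short sketch. The Python/NumPy code below is a hypothetical toy version of a sparse positive embedding trained on odd-one-out triplets, not the authors' implementation: the object count, dimensionality, hyperparameters, and the randomly generated trials are all placeholder assumptions. Each object receives a non-negative weight vector, the probability of each possible odd-one-out response is a softmax over the three pairwise dot-product similarities, and an L1 penalty encourages sparse, interpretable dimensions.

    import numpy as np

    rng = np.random.default_rng(0)

    n_objects, n_dims = 50, 10      # stand-ins for 1,854 objects / 49 dimensions (assumption)
    l1_weight, lr, n_epochs = 0.005, 0.05, 20

    # Non-negative embedding: one row of dimension weights per object.
    X = rng.random((n_objects, n_dims)) * 0.1

    # Odd-one-out trials (i, j, k), recorded as: k was the odd one out,
    # i.e. (i, j) was judged the most similar pair. Real trials would come
    # from the crowdsourced task; these random triplets are placeholders.
    trials = rng.integers(0, n_objects, size=(5000, 3))
    trials = trials[(trials[:, 0] != trials[:, 1])
                    & (trials[:, 0] != trials[:, 2])
                    & (trials[:, 1] != trials[:, 2])]

    def softmax(s):
        z = np.exp(s - s.max())     # numerically stable softmax
        return z / z.sum()

    for epoch in range(n_epochs):
        for i, j, k in trials:
            # Pairwise dot-product similarities among the three objects.
            s = np.array([X[i] @ X[j], X[i] @ X[k], X[j] @ X[k]])
            p = softmax(s)                      # modeled choice probabilities
            g = p - np.array([1.0, 0.0, 0.0])   # gradient of -log P[(i, j) chosen]
            # Backpropagate through the three similarities; the L1 sparsity
            # penalty adds a constant subgradient on the non-negative orthant.
            grad_i = g[0] * X[j] + g[1] * X[k] + l1_weight
            grad_j = g[0] * X[i] + g[2] * X[k] + l1_weight
            grad_k = g[1] * X[i] + g[2] * X[j] + l1_weight
            # Projected gradient step: update, then clip back to non-negativity.
            X[i] = np.maximum(X[i] - lr * grad_i, 0.0)
            X[j] = np.maximum(X[j] - lr * grad_j, 0.0)
            X[k] = np.maximum(X[k] - lr * grad_k, 0.0)

    # Predicted choice = pair with the highest similarity; on real behavioral
    # data this is where cross-validated accuracy (63% in the abstract) would
    # be measured.
    correct = sum((X[i] @ X[j]) > max(X[i] @ X[k], X[j] @ X[k]) for i, j, k in trials)
    print(f"within-sample accuracy: {correct / len(trials):.2f}")

Once trained on real triplet choices, the columns of X would correspond to candidate dimensions and each object's row to its weights on them; object-to-object similarity is then simply the dot product of two weight vectors, which is what makes such an embedding usable as a generative model of similarity judgments.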

Acknowledgement: Feodor Lynen Fellowship of the Alexander von Humboldt Foundation