September 2017, Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
The Relative Contribution of Features and Dimensions to Semantic Similarity
Author Affiliations
  • Marius Cătălin Iordan
    Princeton Neuroscience Institute, Princeton University
  • Cameron Ellis
    Psychology Department, Princeton University
  • Daniel Osherson
    Psychology Department, Princeton University
  • Jonathan Cohen
    Princeton Neuroscience Institute, Princeton University
    Psychology Department, Princeton University
Journal of Vision August 2017, Vol. 17, 1245. doi: https://doi.org/10.1167/17.10.1245
Abstract

Similarity governs our perception and reasoning, helping us to relate new stimuli to long-established category labels and to generalize learned behaviors to novel situations. Similarity has often been explained as arising from commonality of features and parts (Attneave, 1950; Tversky & Hemenway, 1984) and as a defining metric for categorization processes (Ashby & Lee, 1991; McClelland & Rogers, 2004). However, it remains unclear how feature-based (implicit) and explicit components of similarity combine to give rise to perceptual similarity for real-world objects. Here, we collected pairs of explicit unconstrained ('How similar are these two animals?') and dimension-cued similarity judgments, as well as feature ratings used to derive an implicit measure of similarity, for ten basic-level animals across twelve similarity dimensions (six objective, e.g., size; six subjective, e.g., cuteness), presented as either text labels or short videos. Participants' explicit similarity judgments were virtually unaffected by presentation modality (r = 0.95). Feature-based similarity significantly predicted dimension-cued similarity (top half of dimensions: r = 0.63–0.92, p < 0.001), and dimension-cued similarity significantly predicted unconstrained similarity (top half of dimensions: r = 0.78–0.96, p < 0.001). However, feature-based similarity could not explain unconstrained similarity on a dimension-by-dimension basis, but only when all implicit similarity dimensions were linearly combined into an aggregate measure (equal-weight: r = 0.35, p < 0.05; optimal-weight: r = 0.65, p < 0.001). Furthermore, we observed an interaction between subjectivity and explicitness: subjective implicit dimensions explained more variance in explicit similarity, while objective explicit dimensions explained more variance in unconstrained similarity (subjectivity main effect p < 0.01; interaction p < 0.01). Together, our results suggest that feature-based and dimension-cued similarity may combine in a non-trivial way, depending on feature subjectivity, to generate similarity judgments. Given recent work showing an interaction between cognitive control and infero-temporal regions in computing similarity judgments (Keung et al., 2016), our results offer a promising hypothesis for elucidating the neural components of similarity and its susceptibility to attention and other sources of bias.
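As a minimal sketch of the aggregation logic described above, the following Python snippet illustrates how per-dimension feature similarities could be combined with equal weights versus least-squares-fitted ('optimal') weights to predict unconstrained judgments. The random toy data, variable names, and the difference-based similarity measure are illustrative assumptions, not the authors' actual stimuli or analysis code.

    # Illustrative sketch (not the authors' code): aggregate per-dimension
    # feature similarities into equal-weight and fitted-weight predictors of
    # unconstrained similarity judgments. All data here are random stand-ins.
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(0)

    n_animals, n_dimensions = 10, 12                   # ten animals, twelve dimensions
    ratings = rng.random((n_animals, n_dimensions))    # stand-in feature ratings in [0, 1]

    pairs = list(combinations(range(n_animals), 2))    # 45 animal pairs

    # Feature-based (implicit) similarity per dimension: closeness of ratings.
    feature_sim = np.array([[1.0 - abs(ratings[i, d] - ratings[j, d])
                             for d in range(n_dimensions)]
                            for i, j in pairs])        # shape (45, 12)

    # Stand-in unconstrained ('How similar are these two animals?') judgments.
    unconstrained = rng.random(len(pairs))

    # Equal-weight aggregate: simple average of per-dimension similarities.
    equal_weight = feature_sim.mean(axis=1)
    r_equal = np.corrcoef(equal_weight, unconstrained)[0, 1]

    # Optimal-weight aggregate: least-squares fit of per-dimension weights.
    weights, *_ = np.linalg.lstsq(feature_sim, unconstrained, rcond=None)
    optimal = feature_sim @ weights
    r_optimal = np.corrcoef(optimal, unconstrained)[0, 1]

    print(f"equal-weight r = {r_equal:.2f}, optimal-weight r = {r_optimal:.2f}")

With real ratings in place of the random arrays, the two correlations would correspond to the equal-weight and optimal-weight aggregate measures reported in the abstract.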

Meeting abstract presented at VSS 2017
