Journal of Vision | September 2024 | Volume 24, Issue 10 | Open Access
Vision Sciences Society Annual Meeting Abstract
The contribution of features, shape, and semantics to object similarity
Author Affiliations & Notes
  • Brent Pitchford
    University of Iceland
    Icelandic Vision Lab
  • Inga María Ólafsdóttir
    Reykjavik University
    Icelandic Vision Lab
  • Marelle Maeekalle
    University of Iceland
    Icelandic Vision Lab
  • Heida Maria Sigurdardottir
    University of Iceland
    Icelandic Vision Lab
  • Footnotes
    Acknowledgements: This work was supported by The Icelandic Research Fund (Grants No. 228916 and 218092) and the University of Iceland Research Fund.
Journal of Vision September 2024, Vol.24, 1421. doi:https://doi.org/10.1167/jov.24.10.1421
      © ARVO (1962-2015); The Authors (2016-present)
Abstract

Object similarity may not be an abstract construct that can be defined outside the operational definition of task context. We asked people to assess the similarity of objects by rating their semantic relatedness, overall shape, and internal features. Shape similarity was assessed by rating object silhouettes with no internal features. Featural similarity was assessed by rating grayscale objects whose global shape was distorted. Object pairs differed either at the basic level (e.g., hairbrush, pipe) or at the subordinate level (e.g., two different bowties). Semantic similarity of objects differing at the basic level was measured by rating the similarity in meaning of word pairs. We then assessed to what degree semantics, shape, and features predicted a) explicit judgments of visual similarity of objects, b) implicit measures of object similarity as assessed by object foraging, and c) similarity in an object space derived from activations of a deep layer of a convolutional neural network trained on object classification. Explicit judgments of visual similarity were predicted by both features and shape, but not semantics. Unlike explicit judgments, implicit object similarity depended on whether people searched for target objects among distractors of the same or a different category. If targets and distractors differed at the basic level, both shape and semantic similarity predicted unique variability in foraging not accounted for by features. If objects belonged to the same category, featural similarity predicted unique variability not accounted for by shape. Contrary to previous suggestions that neural networks are primarily feature-based, shape uniquely explained variability in object space distance not accounted for by features when objects differed at the basic level. Different information therefore contributes to people's explicit vs. implicit judgments of object qualities, and can also be distinguished from measures of similarity extracted from artificial neural networks trained on object classification.
