December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract
Revealing feature spaces underlying similarity judgments of natural scenes in individual participants
Author Affiliations & Notes
  • Peter Brotherwood
    cerebrUM, Département de Psychologie, Université de Montréal, Canada
    CHBH, School of Psychology, University of Birmingham, UK
  • Andrey Barsky
    CHBH, School of Psychology, University of Birmingham, UK
  • Kendrick Kay
    Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, United States
  • Ian Charest
    cerebrUM, Département de Psychologie, Université de Montréal, Canada
    CHBH, School of Psychology, University of Birmingham, UK
  • Footnotes
    Acknowledgements  Collection of the Natural Scenes Dataset was supported by NSF IIS-1822683 and NSF IIS-1822929.
Journal of Vision December 2022, Vol.22, 3756. doi:https://doi.org/10.1167/jov.22.14.3756
      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Understanding and responding to the dynamic environment around us requires continuously mapping sensory information onto a representational space in our minds. This space can be characterised by a collection of features (e.g. location, animacy, and function) that drive perceived similarities between natural scenes. Resolving the structure of this internal multidimensional space is critical for understanding how we perceive, distinguish, and categorize natural scenes. Here, we collected similarity judgment data using a multiple arrangements task in eight participants for a set of 100 diverse natural scenes. To estimate the underlying feature dimensions forming the basis of these similarity judgments, we trained small neural networks to learn object features. For each participant, we randomly initialised a 90-dimensional embedding layer and used softmax-filtered dot products to predict the participant’s behavioral choices by identifying, for a given trio of stimuli, the stimulus most dissimilar from the other two. The loss estimated from these “odd-one-out” triplet predictions was backpropagated to update the embedding weights. Cross-validated cosine distances computed from the learned embeddings were then used to construct model-generated representational dissimilarity matrices (RDMs). The resulting RDMs were significantly correlated with same-subject RDMs computed from the observed similarity judgment data using classic iterative weighted averaging procedures (average Spearman’s rho = 0.887; p < 0.001), suggesting that the learned embeddings capture multidimensional similarity spaces at the individual-participant level. The learned embeddings of each participant showed interpretable dimensions representing a variety of high- and low-order object features.
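The odd-one-out objective described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the triplet indices, initialisation scale, learning rate, and plain SGD update are all assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STIMULI, N_DIM = 100, 90  # 100 scenes, 90-dimensional embedding (per the abstract)
E = rng.normal(scale=0.1, size=(N_STIMULI, N_DIM))  # randomly initialised embedding

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    ez = np.exp(z)
    return ez / ez.sum()

def odd_one_out_probs(E, i, j, k):
    """Score each candidate 'odd one out' by the dot-product similarity of
    the *other* two stimuli: the more similar that remaining pair, the more
    likely the third stimulus is the odd one out."""
    z = np.array([E[j] @ E[k], E[i] @ E[k], E[i] @ E[j]])
    return softmax(z)

def sgd_step(E, triplet, choice, lr=0.1):
    """One gradient step on the cross-entropy loss for a single trial.
    `choice` is the index (0, 1, or 2) of the stimulus judged odd."""
    i, j, k = triplet
    p = odd_one_out_probs(E, i, j, k)
    loss = -np.log(p[choice])
    g = p.copy()
    g[choice] -= 1.0  # d loss / d scores for softmax + cross-entropy
    # chain rule through the three dot products
    grad_i = g[1] * E[k] + g[2] * E[j]
    grad_j = g[0] * E[k] + g[2] * E[i]
    grad_k = g[0] * E[j] + g[1] * E[i]
    E[i] -= lr * grad_i
    E[j] -= lr * grad_j
    E[k] -= lr * grad_k
    return loss

# Hypothetical trial: stimulus 5 chosen as odd one out of (5, 17, 42).
loss_before = sgd_step(E, (5, 17, 42), choice=0)
loss_after = -np.log(odd_one_out_probs(E, 5, 17, 42)[0])
```

A single step lowers the loss on the observed trial; repeated over many triplets, the embedding weights come to predict each participant's odd-one-out choices.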
Additionally, model-generated RDMs showed greater inter-subject similarity (average Spearman’s rho = 0.29) than RDMs generated by iterative weighted averaging (0.24). This suggests that our model not only reproduces similarity judgment data and captures the underlying basis of these judgments at the individual level, but also better captures idiosyncratic relationships between individual participants.
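Constructing a cosine-distance RDM from an embedding and comparing two RDMs by rank correlation can be sketched as below. This is a simplified illustration: the cross-validation scheme from the abstract is omitted, the random matrix stands in for a learned embedding, and the rank function does not average ties (adequate for continuous distances).

```python
import numpy as np

def cosine_rdm(E):
    """Pairwise cosine distances between row embeddings (the RDM)."""
    En = E / np.linalg.norm(E, axis=1, keepdims=True)
    return 1.0 - En @ En.T

def spearman_upper(rdm_a, rdm_b):
    """Spearman correlation over the off-diagonal upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    def rank(x):
        # simple ranking; ties are not averaged
        r = np.empty(len(x))
        r[np.argsort(x)] = np.arange(len(x))
        return r
    return np.corrcoef(rank(rdm_a[iu]), rank(rdm_b[iu]))[0, 1]

rng = np.random.default_rng(1)
E_learned = rng.normal(size=(100, 90))  # stand-in for a participant's learned embedding
rdm = cosine_rdm(E_learned)             # 100 x 100, zero diagonal
```

Comparing such model-generated RDMs across participants, or against RDMs from the behavioral averaging procedure, reduces to calls like `spearman_upper(rdm_a, rdm_b)`.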
