December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract  |   December 2022
Representations of object-dissimilarity before and after concept learning
Author Affiliations
  • Jonathan K. Doyon
    George Washington University
  • Sarah Shomstein
    George Washington University
  • Gabriela Rosenblau
    George Washington University
Journal of Vision December 2022, Vol.22, 3726. doi:

Adapting to novel environments requires reorganizing existing knowledge through learning. Yet mechanistic accounts of how humans represent and dynamically update knowledge remain underdeveloped. Here, we characterize how object similarities are represented and how they change during learning. In two experiments, participants completed online tasks requiring them to sort objects by similarity and to implicitly learn object features via trial-level feedback. We hypothesized that learned associations among objects change dynamically following implicit learning.

In Experiment 1, participants (N=241, 141 women, M_age=22 years) spatially arranged subsets of 120 object pictures from pre-defined categories (activities/fashion/foods). Object-pair dissimilarity representations were constructed via multidimensional scaling. Participants recovered both coarse (e.g., foods/non-foods) and sub-category (e.g., healthy foods/desserts) structure: same-category item pairs were placed closer together. Dimensionality reduction revealed three principal components explaining 61% of the variance (general performance, foods vs. non-foods, fashion vs. activities).

In Experiment 2, participants (N=87, 60 women, M_age=22 years) completed an implicit-learning task. Given pseudo-words meaning colorful or large, participants rated how much the pseudo-word (e.g., "ation") applied to subsets of Experiment 1's stimuli, updating their predictions about the word's meaning based on feedback. In both conditions, participants minimized prediction errors (the difference between ratings and feedback) over time. After learning, participants completed 10 minutes of the arrangement task from Experiment 1. Object similarities again depended on category membership (cf. Experiment 1). Importantly, arrangements were additionally based on the learned features, i.e., largeness and colorfulness.
In the colorful condition, 32% of participants correctly identified that they had learned about colorfulness; in the large condition, 15% identified the concept large (8% reported having learned about colorfulness). Pre- and post-learning dissimilarity matrices were predicted by disparities in colorfulness, but not by disparities in largeness, corroborating that colorfulness was the dominant feature across learning conditions. Together, we find that semantic categories are key dimensions of object representations. Critically, we show how knowledge representations are reorganized through learning.
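As an illustration only (this is not the authors' analysis pipeline, whose details the abstract does not specify), the step from a pairwise dissimilarity matrix to spatial object coordinates can be sketched with classical (Torgerson) multidimensional scaling. The toy dissimilarity matrix below is hypothetical: two tight "categories" whose members are near each other and far from the other pair, mimicking the same-category-items-closer pattern reported above.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed a symmetric n x n dissimilarity
    matrix D into k dimensions via double centering and eigendecomposition."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)      # eigh returns ascending order
    idx = np.argsort(eigvals)[::-1][:k]       # keep the top-k eigenvalues
    scale = np.sqrt(np.clip(eigvals[idx], 0, None))
    return eigvecs[:, idx] * scale            # n x k coordinates

# Hypothetical dissimilarities: items {0,1} form one category, {2,3} another
D = np.array([[0., 1., 5., 5.],
              [1., 0., 5., 5.],
              [5., 5., 0., 1.],
              [5., 5., 1., 0.]])
X = classical_mds(D, k=2)

# Within-category pairs should land closer than between-category pairs
within = np.linalg.norm(X[0] - X[1])
between = np.linalg.norm(X[0] - X[2])
```

The recovered 2-D coordinates reproduce the category structure: the within-category distance stays small while the between-category distance stays large, which is the qualitative pattern the abstract describes for same-category item pairs.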

