Vision Sciences Society Annual Meeting Abstract  |   September 2024
Learning Relational Categories through Guided Comparisons
Author Affiliations & Notes
  • Andrew Jun Lee
    University of California, Los Angeles
  • Hongjing Lu
    University of California, Los Angeles
  • Keith Holyoak
    University of California, Los Angeles
  • Footnotes
    Acknowledgements: NSF Grant IIS-1956441
Journal of Vision September 2024, Vol.24, 663. doi:https://doi.org/10.1167/jov.24.10.663
Abstract

Visual scenes are not perceived as simple constellations of objects, but rather as objects in relation to one another. Humans can efficiently learn visual categories based on relational knowledge from just a handful of examples; however, the underlying learning mechanisms remain unclear. Here, we hypothesize that analogical comparisons can facilitate learning of visual categories defined by relations. Method: We examined learning using the Synthetic Visual Reasoning Test (SVRT), a collection of 23 relational category-learning problems (Fleuret et al., 2011). Each problem consists of images containing artificially generated, island-shaped objects; positive exemplars instantiate a rule based on spatial relations, and negative exemplars do not. Participants categorized each successive test image into the correct set until an accuracy criterion was met, with feedback provided on each trial. We conducted two experiments that varied the display format and coloring scheme of the SVRT images. In both experiments, images from previous trials remained on the screen as a visual record. In Experiment 1, these record images were either spatially segregated or intermixed by category membership. In Experiment 2, the record images were colored in a way that differentiated objects based on their relations; the coloring of the record images thus guided analogical comparisons between them. Results: Learning was more efficient when prior images in the display were spatially segregated by category membership, yielding an average 53% reduction in the proportion of failed SVRT problems. Furthermore, when objects were assigned corresponding colors to facilitate alignment of related objects across images, learning was more efficient relative to the uncolored condition (a 33% reduction in failure proportion). Conclusion: Human learning of visual relational categories depends on the ability to efficiently extract relational knowledge from visual inputs. Visual displays that facilitate relation extraction promote learning on the basis of analogical comparisons.
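
The trial procedure described above can be summarized in a short sketch. The following Python example is hypothetical: the function names, the criterion window, and the display callback are illustrative assumptions rather than the authors' implementation. It shows the criterion-based loop with trial-by-trial feedback and an on-screen record of prior images that is either segregated or intermixed by category (the Experiment 1 manipulation).

    # Hypothetical sketch of the criterion-based trial procedure; names and
    # parameters are assumptions, not the authors' implementation.
    import random
    from collections import deque

    CRITERION_WINDOW = 8      # assumed: number of recent trials checked
    CRITERION_ACCURACY = 1.0  # assumed: proportion correct required to stop

    def run_svrt_problem(trials, get_response, display_record, segregated=True):
        """Run one SVRT problem until the accuracy criterion is met.

        trials: iterable of (image, is_positive) pairs presented in order.
        get_response: callable returning the participant's True/False judgment.
        display_record: callable that redraws the on-screen record of past images.
        segregated: True = record grouped by category, False = intermixed.
        """
        record = {"positive": [], "negative": []}  # past exemplars, by category
        recent = deque(maxlen=CRITERION_WINDOW)    # rolling window of correctness

        for trial_num, (image, is_positive) in enumerate(trials, start=1):
            correct = (get_response(image) == is_positive)
            recent.append(correct)
            print(f"Trial {trial_num}: {'correct' if correct else 'incorrect'}")  # feedback

            # Add the current image to the visual record and redraw it.
            record["positive" if is_positive else "negative"].append(image)
            if segregated:
                layout = [record["positive"], record["negative"]]  # two spatial groups
            else:
                mixed = record["positive"] + record["negative"]
                random.shuffle(mixed)
                layout = [mixed]                                   # one intermixed group
            display_record(layout)

            # Stop once accuracy over the recent window reaches criterion.
            if len(recent) == CRITERION_WINDOW and sum(recent) / len(recent) >= CRITERION_ACCURACY:
                return trial_num   # trials to criterion
        return None                # criterion never met: problem counted as failed
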
