Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2016
Object categorization performance modeled using multidimensional scaling and category-consistent features
Author Affiliations
  • Michael Hout
    New Mexico State University
  • Justin Maxfield
    Stony Brook University
  • Arryn Robbins
    New Mexico State University
  • Gregory Zelinsky
    Stony Brook University
Journal of Vision September 2016, Vol.16, 250. doi:10.1167/16.12.250
Abstract

The ability to categorize objects is a ubiquitous human behavior that, like many aspects of cognition, is accomplished so rapidly as to seldom enter consciousness. Yet somehow an instance of a basset hound is classified as a family pet, a dog, or an animal, and not a cat or duck or desk. Our work addresses the complex visual similarity relationships within and between categories that make this fundamental cognitive behavior possible. We studied these similarity relationships using two complementary approaches: 1) Multidimensional Scaling (MDS) data obtained from human observers; and 2) Category-Consistent Features (CCFs), the important features of a target category, computationally defined as the high-frequency and low-variability features found across images of category exemplars (Maxfield et al., VSS2015). Participants provided spatial similarity ratings for 144 objects (from 4 superordinate-level categories, each with 4 nested basic-level categories, and 3 nested subordinates). Ratings were then subjected to MDS analysis, which successfully recovered the subordinate, basic, and superordinate-level category clusters within our stimuli. We then identified "centroids" for categories at each level of the hierarchy, and used the distance of each category centroid from the target or lure (an object from the same parent category as the target) to predict the performance of other participants (leave-one-out method) in a categorical search task. We found that behavioral similarity ratings reliably predicted categorical search performance (e.g., time to fixate a target exemplar or lure), and did so better than a random model. These findings also align with CCF-model performance, which defines similarity computationally, based on the visual features that are shared among category exemplars.
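The centroid-distance idea above can be sketched in a few lines. This is a minimal illustration, not the study's analysis pipeline: the 2-D coordinates, the six objects, and the "dog"/"cat" labels are all invented for the example, and the real MDS solution would span 144 objects across a three-level hierarchy.

```python
import numpy as np

def category_centroids(coords, labels):
    """Mean position of each category's exemplars in MDS space."""
    labels = np.array(labels)
    return {c: coords[labels == c].mean(axis=0) for c in set(labels)}

def centroid_distance(point, centroid):
    """Euclidean distance from one object to a category centroid."""
    return float(np.linalg.norm(point - centroid))

# Toy 2-D MDS solution for six objects (coordinates are illustrative)
coords = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, -0.1],   # "dog" cluster
                   [2.0, 2.0], [2.1, 1.9], [1.9, 2.2]])   # "cat" cluster
labels = ["dog", "dog", "dog", "cat", "cat", "cat"]

cents = category_centroids(coords, labels)
# A target exemplar should lie nearer its own category's centroid
# than the centroid of a same-parent lure category.
d_target = centroid_distance(coords[0], cents["dog"])
d_lure = centroid_distance(coords[0], cents["cat"])
```

In the study, such distances (computed from held-out participants' ratings in a leave-one-out scheme) served as predictors of search measures like time to fixate the target.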
Taken together, our findings suggest that human-based and computational methods of quantifying visual similarity offer important and complementary insights into how similarity impacts people's ability to represent object categories across a hierarchy, and use these representations to conduct search.
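The CCF definition (high-frequency, low-variability features across category exemplars) suggests a simple selection rule: score each feature by its mean frequency across exemplars relative to its variability, and keep the top scorers. The sketch below is one plausible reading of that definition, with an invented exemplar-by-feature frequency matrix; the actual CCF model's feature representation and selection procedure may differ in detail.

```python
import numpy as np

def select_ccfs(freqs, top_k=2):
    """Rank features by a signal-to-noise score: mean frequency across
    exemplars divided by its standard deviation. Features that occur
    often AND consistently score highest."""
    mean = freqs.mean(axis=0)
    sd = freqs.std(axis=0) + 1e-9          # guard against division by zero
    snr = mean / sd
    return np.argsort(snr)[::-1][:top_k]   # indices of the top-scoring features

# Toy data: rows are category exemplars, columns are visual features.
# Feature 0 is frequent and stable; feature 1 is rare; feature 2 is
# frequent but erratic across exemplars.
freqs = np.array([[5.0, 0.0, 9.0],
                  [5.1, 0.2, 1.0],
                  [4.9, 0.1, 8.0]])
best = select_ccfs(freqs, top_k=1)  # feature 0 ranks first
```

Under this scoring, the frequent-and-stable feature wins out over both the rare feature and the frequent-but-variable one, matching the abstract's characterization of CCFs.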

Meeting abstract presented at VSS 2016
