Vision Sciences Society Annual Meeting Abstract  |   August 2004
Classification image weights can discriminate between prototype and exemplar category representations
Author Affiliations
  • Jason M. Gold
    Indiana University, Bloomington, Indiana, USA
Journal of Vision August 2004, Vol.4, 661. doi:https://doi.org/10.1167/4.8.661
Abstract

Purpose: A fundamental aspect of pattern recognition is the ability to form categories. According to prototype models, an observer's category representation consists of a single summary abstraction that is the central tendency of the individual members of the category, and classification decisions are based on the similarity of a test item to each category prototype. According to exemplar models, the individual members of a category are stored, and classification decisions are based on the separate similarities of a test item to each of the stored items. We show that these two classes of models make different predictions about the relative weighting of stimulus features in classification images that are conditioned upon the item presented. We then use these predictions to determine the category representations used by human observers in a simple categorization task.

Methods: 3 observers participated in a 2-alternative spatial categorization task. A white square appeared on a gray background in 1 of 4 spatially distinct, fixed locations, corresponding to the four corners of a large square; on each trial, one of these locations was chosen at random. The observer judged which of two categories the square belonged to: ‘above’ or ‘below’ fixation. Gaussian pixel noise was added to each of the 4 possible white square locations, and the contrast of the white square was varied across trials to maintain ∼71% correct performance.

Results & Conclusions: Classification images were computed by correlating the external stimulus noise with observer responses for each stimulus-response combination. The classification image weights obtained from the 3 observers were nearly identical to those produced by a simulated observer using an exemplar category representation and clearly different from the pattern produced by a simulated observer using a prototype representation. We are currently exploring the generality of the technique by applying it to other tasks and stimuli.
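The abstract does not include code; as a rough illustration of the logic only, the Python sketch below simulates a toy version of the 4-location task, implements a distance-to-prototype rule and a summed-similarity exemplar rule (both with assumed parameter values), and estimates stimulus-conditioned classification images by averaging the external noise fields within each stimulus-response cell. All names and numbers here are illustrative assumptions, not the authors' methods.

```python
import numpy as np

rng = np.random.default_rng(0)

N_TRIALS = 20000      # simulated trials per observer (illustrative)
SIGNAL = 1.0          # white-square contrast, arbitrary units (assumed)
NOISE_SD = 1.0        # SD of the Gaussian pixel noise at each location (assumed)
C = 2.0               # similarity gradient for the exemplar rule (assumed)

# One value per possible square location: indices 0,1 = 'above', 2,3 = 'below'.
exemplars = np.eye(4) * SIGNAL
category = np.array([0, 0, 1, 1])
prototypes = np.stack([exemplars[category == k].mean(axis=0) for k in (0, 1)])

def prototype_observer(stim):
    # Compare the noisy stimulus to the two category prototypes; pick the nearer one.
    return int(np.argmin(np.linalg.norm(stim - prototypes, axis=1)))

def exemplar_observer(stim):
    # Sum a nonlinear (exponential) similarity to every stored exemplar, by category.
    sim = np.exp(-C * np.linalg.norm(stim - exemplars, axis=1))
    return int(np.argmax([sim[category == k].sum() for k in (0, 1)]))

def classification_images(observer):
    # Average the external noise fields separately for each
    # stimulus-location x response cell (stimulus-conditioned images).
    sums = np.zeros((4, 2, 4))
    counts = np.zeros((4, 2))
    for _ in range(N_TRIALS):
        loc = rng.integers(4)
        noise = rng.normal(0.0, NOISE_SD, size=4)
        resp = observer(exemplars[loc] + noise)
        sums[loc, resp] += noise
        counts[loc, resp] += 1
    return sums / np.maximum(counts, 1)[..., None]

for name, obs in [("prototype", prototype_observer), ("exemplar", exemplar_observer)]:
    ci = classification_images(obs)
    print(f"{name} observer, stimulus at location 0, response 'above':",
          np.round(ci[0, 0], 3))
```

In this toy version, the prototype rule weights the two within-category locations equally, while the nonlinear exemplar rule concentrates weight on the location that was actually presented; it is this kind of difference in the conditioned classification image weights that the abstract describes.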

Gold, J. M., Cohen, A. L., & Shiffrin, R. (2004). Classification image weights can discriminate between prototype and exemplar category representations [Abstract]. Journal of Vision, 4(8):661, 661a, http://journalofvision.org/4/8/661/, doi:10.1167/4.8.661.