Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract | September 2016
Ecologically Valid Categorization and Best-Classifier Feedback
Author Affiliations
  • Sarah Williams
    University of Central Florida
  • Andrew Wismer
    University of Central Florida
  • Troy Schiebel
    University of Central Florida
  • Corey Bohil
    University of Central Florida
Journal of Vision September 2016, Vol.16, 405. doi:https://doi.org/10.1167/16.12.405
Abstract

Classification training typically involves viewing a series of category examples, making a classification response to each, and receiving corrective feedback regarding category membership. Objective feedback (i.e., based on actual category membership) suggests that perfect accuracy is possible even when it may not be (e.g., when the categories overlap). Previous research has shown that this type of feedback can be detrimental to learning an optimal (long-run reward-maximizing) decision criterion by fostering excessive attention to trial-to-trial accuracy; some accuracy must be sacrificed to maximize long-run reward (Bohil, Wismer, Schiebel, & Williams, 2015; Bohil & Maddox, 2003). Thus, it is important to consider other types of feedback for training, such as feedback derived from the responses of an "optimal" performer, which indicates that even the optimal response criterion produces occasional errors. In the current study, normal or cancer-containing mammograms were used to assess how feedback influences classification. Participants earned more points for correct "cancer" responses than for correct "normal" responses. Feedback was given in one of two forms: objective (based on actual category membership) or based on a "best" classifier (i.e., the responses of the nearest-optimal performer from an earlier classification study). Critically, the performance of an optimal or "best" classifier indicates that errors should be expected even when using the best possible classification criterion, whereas objective feedback implies that 100% accuracy may be possible when it is not. Signal detection analyses indicated decision criterion values that were closer to optimal in the best-classifier condition. Participants trained with best-classifier feedback also earned higher point totals and showed the predicted reduction in overall response accuracy compared to participants trained with objective feedback. This work replicates earlier research using simple artificial stimuli and shows that feedback reflecting a more attainable performance level supports more nearly optimal decision criterion placement and performance.
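The accuracy-for-reward trade-off described above follows from standard signal detection theory: when correct responses to the two categories pay unequally, the reward-maximizing likelihood-ratio criterion shifts away from the accuracy-maximizing one. The sketch below is a minimal illustration, not the authors' analysis; it assumes two equal-variance Gaussian categories, equal base rates, and hypothetical values (d' = 1, 2 points for a correct "cancer" response, 1 point for a correct "normal" response), none of which are specified in the abstract.

```python
# Minimal SDT sketch of the reward-vs-accuracy criterion trade-off.
# All numeric values are hypothetical, not taken from the abstract.
from math import log
from scipy.stats import norm

d_prime = 1.0                   # separation between category means (hypothetical)
p_cancer = p_normal = 0.5       # equal base rates (assumed)
v_cancer, v_normal = 2.0, 1.0   # points for correct "cancer" / "normal" (hypothetical)

# Reward-maximizing likelihood-ratio criterion (Green & Swets):
# beta* = (P(normal) * V_normal) / (P(cancer) * V_cancer)
beta_opt = (p_normal * v_normal) / (p_cancer * v_cancer)

def cutoff(beta, d):
    """Convert a likelihood-ratio criterion to a cutoff x_c on the decision axis;
    'cancer' is reported whenever x >= x_c."""
    return log(beta) / d + d / 2.0

def performance(x_c):
    """Accuracy and expected points per trial for a given cutoff."""
    hit = 1.0 - norm.cdf(x_c - d_prime)   # P("cancer" | cancer)
    cr = norm.cdf(x_c)                    # P("normal" | normal)
    accuracy = p_cancer * hit + p_normal * cr
    points = p_cancer * v_cancer * hit + p_normal * v_normal * cr
    return accuracy, points

for label, beta in [("accuracy-maximizing", 1.0), ("reward-maximizing", beta_opt)]:
    acc, pts = performance(cutoff(beta, d_prime))
    print(f"{label:20s} beta={beta:.2f}  accuracy={acc:.3f}  points/trial={pts:.3f}")

# With these hypothetical values the reward-maximizing criterion earns more
# points per trial (~1.10 vs ~1.04) despite lower accuracy (~0.65 vs ~0.69).
```

Under these assumptions, the reward-maximizing criterion also commits unavoidable errors, which is the property that best-classifier feedback makes visible to the learner and that purely objective feedback conceals.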

Meeting abstract presented at VSS 2016
