September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | August 2017
Is Mean Size a Good Example of a Statistical Summary Representation? Centroid versus Mean Size Judgments
Author Affiliations
  • Laris Rodriguez-Cintron
    Cognitive Sciences, University of California, Irvine
  • Charles Wright
    Cognitive Sciences, University of California, Irvine
  • Charles Chubb
    Cognitive Sciences, University of California, Irvine
Journal of Vision August 2017, Vol. 17, 53. doi: https://doi.org/10.1167/17.10.53
Abstract

Introduction. Work by Ariely (2001) inspired interest in using judgments of the mean size of a briefly presented set of stimuli of differing sizes as a prototypical example of a statistical summary representation (SSR). Like Ariely (2001), many authors have concluded that mean-size judgments rely on a global strategy, i.e., that most members of the set are included in the calculation. However, Myczek and Simons (2008) presented simulation results suggesting that mean-size judgments could instead result from a subsampling strategy. To explore whether subsampling is the appropriate mechanism to explain performance in the mean-size task, we used an efficiency analysis to compare performance across three tasks: two versions of the centroid task and the mean-size task. Like the subsampling simulations, the efficiency analysis used in centroid-task research (Sun, Chubb, Wright, & Sperling, 2015) is based on the degree to which an observer fails to register or include all of the stimuli in the calculation.

Method. Observers were presented with a cloud of either 3 or 9 squares for 300 ms, followed by a mask. In different sessions, observers were asked to estimate one of (a) the mean size of the stimuli, (b) the centroid of the stimuli, ignoring the size differences, or (c) the centroid of the stimuli, weighting each element according to its size.

Results. Efficiency was high in both centroid tasks but substantially lower in the mean-size task.

Conclusions. These results suggest that stimulus size is registered accurately and can be used effectively in the context of centroid judgments but not for judgments of mean size. Presumably, sources of error other than subsampling lead to the low efficiency observed when judging mean size. Given these results, mean-size judgments may be a poor task to use to study SSRs.
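The abstract contains no analysis code, but the three judgments it compares are easy to state computationally. The Python sketch below is a hypothetical illustration, not the authors' code: it generates a random cloud of squares, computes the mean size, the unweighted centroid, and the size-weighted centroid, and then simulates, in the spirit of Myczek and Simons (2008), how the accuracy of a mean-size estimate changes when only a subsample of the items is included. The display parameters (field size, size range, number of trials) and all function names are assumptions chosen for illustration; only the cloud sizes of 3 and 9 items come from the abstract.

```python
# Hypothetical illustration only -- not the authors' analysis code.
# Generates random clouds of squares, computes the three quantities the
# observers judged, and simulates a subsampling mean-size observer in
# the spirit of Myczek and Simons (2008).
import numpy as np

rng = np.random.default_rng(0)

def make_display(n_items, field=200.0, min_size=5.0, max_size=25.0):
    """Random cloud of squares: (x, y) positions and side lengths.
    Field size and size range are arbitrary, assumed values."""
    xy = rng.uniform(0.0, field, size=(n_items, 2))
    sizes = rng.uniform(min_size, max_size, size=n_items)
    return xy, sizes

def mean_size(sizes):
    """Task (a): mean side length of the squares."""
    return sizes.mean()

def centroid_unweighted(xy):
    """Task (b): centroid of the item locations, ignoring size."""
    return xy.mean(axis=0)

def centroid_size_weighted(xy, sizes):
    """Task (c): centroid with each item weighted by its size."""
    w = sizes / sizes.sum()
    return (xy * w[:, None]).sum(axis=0)

def subsampled_mean_size(sizes, k):
    """A subsampling observer: averages only k randomly chosen items."""
    idx = rng.choice(sizes.shape[0], size=k, replace=False)
    return sizes[idx].mean()

if __name__ == "__main__":
    n_trials = 5000
    for n_items in (3, 9):               # cloud sizes used in the experiment
        for k in range(1, n_items + 1):  # how many items the observer samples
            errs = []
            for _ in range(n_trials):
                _, sizes = make_display(n_items)
                errs.append(subsampled_mean_size(sizes, k) - mean_size(sizes))
            rmse = float(np.sqrt(np.mean(np.square(errs))))
            print(f"{n_items} items, subsample of {k}: "
                  f"RMSE of mean-size estimate = {rmse:.2f}")
```

In this toy simulation the error of the subsampled estimate falls to zero when all items are included and shrinks quickly as k grows, which illustrates the point the abstract attributes to Myczek and Simons (2008): a small subsample can already yield fairly accurate mean-size estimates, so accuracy alone cannot establish a global averaging strategy.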

Meeting abstract presented at VSS 2017
