Abstract
Introduction. Ariely (2001) inspired a line of research that uses the judged mean size of a briefly presented set of stimuli of varying sizes as a prototypical example of a statistical summary representation (SSR). Like Ariely, many authors have concluded that mean-size judgments rely on a global strategy, i.e., that most members of the set are included in the calculation. However, Myczek and Simons (2008) presented simulation results suggesting that mean-size judgments could instead result from a subsampling strategy, in which only a few members of the set enter the calculation. To test whether subsampling is the appropriate mechanism to explain performance in the mean-size task, we used an efficiency analysis to compare performance across three tasks: two versions of the centroid task and the mean-size task. Like the subsampling simulations, the efficiency analysis used in centroid-task research (Sun, Chubb, Wright, & Sperling, 2015) is based on the degree to which an ideal observer must fail to register or include stimuli in the calculation in order to match human performance (both the subsampling strategy and the efficiency calculation are sketched in code below).

Method. Observers were presented with a cloud of either 3 or 9 squares for 300 ms, followed by a mask. In different sessions, observers were asked to estimate (a) the mean size of the stimuli, (b) the centroid of the stimuli, ignoring the size differences, or (c) the centroid of the stimuli, weighting each element by its size.

Results. Efficiency was high in both centroid tasks but substantially lower in the mean-size task.

Conclusions. These results suggest that stimulus size is registered accurately and can be used effectively in centroid judgments, but not in judgments of mean size. Presumably, sources of error other than subsampling produce the low efficiency observed when judging mean size. Given these results, mean-size judgment may be a poor task with which to study SSRs.
Meeting abstract presented at VSS 2017
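To make the subsampling critique concrete, here is a minimal simulation sketch in the spirit of Myczek and Simons (2008). It is not their published code: the display statistics (item sizes drawn from a normal distribution), the mean difference between displays, and the function names are our assumptions. A simulated observer averages only k randomly chosen items from each of two displays and reports which display has the larger mean size.

```python
import numpy as np

rng = np.random.default_rng(0)

def subsample_trial(n_items=9, k=2, mean_diff=0.10, sd=0.15):
    """One two-alternative trial: which display has the larger mean size?

    The simulated observer averages only k randomly chosen items per
    display (the subsampling strategy) rather than all n_items.
    Display statistics are illustrative assumptions, not the stimuli
    actually used by Ariely (2001) or Myczek and Simons (2008).
    """
    a = rng.normal(1.0, sd, n_items)              # item sizes, display A
    b = rng.normal(1.0 + mean_diff, sd, n_items)  # display B has the larger mean
    est_a = rng.choice(a, k, replace=False).mean()
    est_b = rng.choice(b, k, replace=False).mean()
    return est_b > est_a                          # True = correct choice

for k in (1, 2, 4, 9):
    pc = np.mean([subsample_trial(k=k) for _ in range(20_000)])
    print(f"k = {k}: proportion correct = {pc:.3f}")
```

Under these assumed display statistics, accuracy is well above chance even with k = 1 or 2, which is why above-chance mean-size discrimination cannot by itself rule out subsampling.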
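The efficiency analysis can likewise be summarized in a few lines. This is a hedged sketch of the logic described by Sun, Chubb, Wright, and Sperling (2015), not their published procedure; the function name and interface are ours. The idea: find the number of items k* that an observer who perfectly averages a random subset of k of the N displayed items would need in order to match the human observer's mean squared error, and report efficiency as k*/N.

```python
import numpy as np

def efficiency_from_errors(item_values, responses, targets):
    """Estimate efficiency as k*/N (sketch; names and interface assumed).

    item_values : (n_trials, N) stimulus values per trial -- sizes for
                  the mean-size task, or one coordinate of position for
                  a centroid task.
    responses   : (n_trials,) human estimates.
    targets     : (n_trials,) true means over all N items per trial.
    """
    _, N = item_values.shape
    # Within-display variance (ddof=1 so the finite-population formula
    # below is exact for sampling without replacement).
    s2 = item_values.var(axis=1, ddof=1).mean()
    mse = np.mean((responses - targets) ** 2)     # human squared error
    # An ideal observer averaging k of the N items has expected squared
    # error s2 * (1/k - 1/N); solve s2 * (1/k* - 1/N) = mse for k*.
    k_star = s2 / (mse + s2 / N)
    return k_star / N
```

On this measure, efficiency near 1 means the response error is no larger than expected from an observer who uses every item; low efficiency for mean size alongside high efficiency for both centroid tasks on the same displays is what points to error sources other than subsampling.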