In recent years, there has been significant interest in understanding how this efficient statistical summary representation is implemented cognitively. Is there a specialized mechanism designed to compute summary statistics efficiently? Or does this ability simply reflect smart sampling strategies, in which we attend to and remember a few items in working memory and use them to derive summary statistics? Ariely (2001) and Chong and Treisman (2003, 2005) argued that mean size extraction involves a parallel process, in part because calculating the average size of an array seems quick and effortless. In addition, they showed that participants do not necessarily have access to the identities of individual items even when they have access to the mean size (Ariely, 2001), and that performance at judging mean size is relatively unaffected by variations in the number of items shown, their variability, and their exposure duration (Chong & Treisman, 2003). However, alternative strategies could allow participants to estimate the average size of items without invoking a fully parallel process dedicated to calculating mean size. Myczek and Simons (2008), for example, proposed several strategic alternatives to a global, parallel process for mean size extraction, showing that size discrimination could be performed by subsampling the arrays: simulated accuracy when only two or three items were sampled and averaged from each array could match, or even exceed, that of human participants. The authors were cautious in noting that participants might not carry out this exact subsampling heuristic, but similar sampling strategies nonetheless provided a means to perform the discrimination task. Since Myczek and Simons (2008) proposed this account, a great deal of work has focused on teasing apart the actual cognitive mechanisms of extracting ensemble information about the mean of a set (e.g., Chong, Joo, Emmanouil, & Treisman, 2008; Simons & Myczek, 2008). For example, Allik, Toom, Raidvee, Averin, and Kreegipuu (2013) suggested that most of the variance in a mean discrimination task could be explained by a simple model taking internal noise and sampling into account. Others have found evidence more consistent with certain “smart” subsampling strategies (e.g., Marchant, Simons, & de Fockert, 2013; Maule & Franklin, 2016). On the other hand, some findings have provided support for more parallel mechanisms: outliers tend to be discounted in extracting the mean (Haberman & Whitney, 2010), which is inconsistent with a straightforward random subsampling account; and in pairs of arrays where only a single item changes between arrays, participants can recognize the change in the mean without knowing which item changed (Haberman & Whitney, 2011; see also Ward, Bear, & Scholl, 2016).
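The logic of the subsampling simulations described above can be illustrated with a minimal Monte Carlo sketch. The function below is not the Myczek and Simons (2008) simulation itself; all parameter values (array size, mean difference, item-size noise) are illustrative assumptions, chosen only to show how discrimination accuracy from averaging a small sample of items can approach that of a larger sample.

```python
import random
import statistics

def subsample_discrimination(k, n_items=12, mean_diff=0.1, noise=0.15,
                             n_trials=10_000, seed=0):
    """Estimate accuracy at judging which of two arrays has the larger
    mean item size when only k randomly sampled items per array are
    averaged. All parameters are illustrative, not taken from the
    cited studies."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        # Two arrays of item sizes; array b's true mean is larger by mean_diff.
        a = [rng.gauss(1.0, noise) for _ in range(n_items)]
        b = [rng.gauss(1.0 + mean_diff, noise) for _ in range(n_items)]
        # A subsampling observer averages only k items from each array.
        est_a = statistics.mean(rng.sample(a, k))
        est_b = statistics.mean(rng.sample(b, k))
        correct += est_b > est_a
    return correct / n_trials
```

Under these assumed parameters, accuracy with k = 2 or 3 is already well above chance, and the gain from sampling the entire array is modest, which is the general pattern the subsampling account appeals to.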