Abstract
When confronted with many visual items, people can represent their variance rapidly and accurately. However, how the visual system computes this variance remains unclear. To investigate this, we examined which of several variability measures, namely the range, the standard deviation, and a weighted standard deviation, best accounts for variance perception. Participants viewed two arrays of Gabor patches with various orientations and judged which array was more heterogeneous. In Experiment 1, we manipulated all orientations except those near the extremes, changing the standard deviation while keeping the range constant. Even when the two arrays had similar ranges, perceived variance was higher for the array with the larger standard deviation, indicating that people represent variance using the standard deviation rather than the range. In Experiment 2, we manipulated the deviance of the extreme orientations, changing the range while keeping the standard deviations similar across conditions. Even when the two arrays had similar standard deviations, perceived variance was smaller for the array whose wider range was produced by a few extreme orientations. This indicates that people weight extreme orientations less than other items when computing the standard deviation. In Experiment 3, we increased the contrast of items whose orientations were near either the mean or an extreme of the set, making them more salient than the rest. Although the actual range and standard deviation of the orientations were constant across conditions, perceived variance was higher when the salient orientations were near an extreme than when they were near the mean, indicating that people weight salient orientations more heavily when computing the standard deviation. Together, these results suggest that people represent orientation variance by computing a weighted standard deviation, weighting some items more or less than others.
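As a minimal sketch of what such a weighted statistic could look like (the abstract does not specify the exact formulation, so the weights $w_i$ and orientations $\theta_i$ below are illustrative assumptions), a weighted standard deviation over the item orientations can be written as

\[
\bar{\theta}_w = \frac{\sum_i w_i \theta_i}{\sum_i w_i}, \qquad
\sigma_w = \sqrt{\frac{\sum_i w_i \left(\theta_i - \bar{\theta}_w\right)^2}{\sum_i w_i}}
\]

where $w_i$ would be smaller for extreme orientations (consistent with Experiment 2) and larger for salient, high-contrast items (consistent with Experiment 3); setting all $w_i$ equal recovers the ordinary standard deviation.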