Abstract
During ensemble representation of real-world objects, do we rely only on pure "retinal" sizes, or do other properties of real-world objects matter? Real-world size is automatically encoded during object representation and influences object size estimation. How might real-world size influence the estimation of the mean size of a set?
We collected and tested a stimulus set of images of real-world objects with small (e.g., cups, locks) and large (e.g., cars, houses) real-world sizes. There were 15 categories with small and 15 categories with large real-world size (24 images per category).
In Experiment 1, participants were instructed to estimate the mean size of a set of eight objects belonging to one category. As a baseline, we presented the same sets, but with the original images replaced by rotated black silhouettes. These silhouettes matched the original images in size but lacked color and texture, making them unidentifiable. Analysis with correction to baseline demonstrated that the mean size of sets of objects from “small” categories was underestimated relative to the mean size of objects from “large” categories.
We conducted Experiment 2 to test whether the effect persists when the common category of the set is eliminated, leaving real-world size as the only feature shared by the set. Experiment 2 used the same procedure and design as Experiment 1, but with three types of sets: objects from different “small” categories, objects from different “large” categories, and “mixed” sets containing objects from both “small” and “large” categories. Analysis with correction to baseline demonstrated no significant differences between conditions.
We conclude that real-world size influences mean size estimation only when the set consists of objects from a single category. We therefore propose that this bias arises at the categorical level of the set and is not observed at the level of individual object representations.