Abstract
Observers asked to indicate the smaller of two images on a screen are slowed down when the real-world sizes of the depicted objects mismatch (e.g., when the smaller image shows a tree and the larger image a broccoli). This familiar-size Stroop effect is also observed for ‘texform’ images, which preserve only unrecognizable shape features of objects; the effect is therefore thought to depend on bottom-up processing. Here, we put this hypothesis to a strong test and juxtapose it with the possibility that learned knowledge of object size can cause the familiar-size Stroop effect. We paired shape-matched images of small vs. large objects (e.g., broccoli vs. tree) and, crucially, created for each pair a customized ambiguous drawing compatible with either categorical interpretation (e.g., either a broccoli or a tree). In two rating studies, we investigated the perceptual similarity of drawings and object images, as well as the perceived properties of these stimuli (e.g., shape and real-world size). Results confirmed that our ambiguous drawings are compatible with either perceptual interpretation. Then, in an online study, we trained two groups of observers to interpret the drawings as either small or large objects (n = 28 and 30, respectively). After training, we tested whether the learned interpretation biased behavioral responses in a familiar-size Stroop task. Indeed, we observed opposite Stroop effects for identical drawings, depending on the previously learned interpretation, F(1, 56) = 10.11, p = 0.002. In sum, our ambiguous drawings can isolate effects of different categorical interpretations of identical input. The behavioral results suggest that such learned interpretations can dominate the automatic processing of familiar size, challenging a purely bottom-up account of the familiar-size Stroop effect. We plan to corroborate these results in a lab-based study and to test whether they extend to ventral stream responses using fMRI.