Abstract
When a morph face is produced with equal physical contributions from a typical parent face and an atypical parent face, the morph is judged to be more similar to the atypical parent. This discontinuity between physical and perceptual distance relationships, called the “atypicality bias” (Tanaka, Giles, Kremen, & Simon, 1998), has also been demonstrated with the object classes of birds and cars (Tanaka & Corneille, 2007). The present work tested the hypothesis that the atypicality bias is not a product of static physical properties of typical or atypical exemplars, but emerges only after the category structure of a given stimulus domain (and thus the nature of its typical members) has been learned. Participants were trained to discriminate between two categories of novel shape stimuli (“blobs”) with which they had no pre-experimental familiarity. Although typical and atypical blob exemplars appeared with equal frequency during category training, the typical blobs within a given family were structurally similar to one another, whereas the atypical blobs were dissimilar to each other and to the typical exemplars. The magnitude of the atypicality bias was assessed in a preference task administered pre- and post-training. The blobs elicited no bias prior to category training, but, as predicted, elicited a significant atypicality bias after training. This change in object perception with category learning is considered from the standpoint of theories that represent item similarities in terms of the relative locations of items in a multi-dimensional space. We propose that category learning alters the dimensions of the space, effectively increasing the perceptual distance between the morph and its typical parent, with the result that the morph appears more similar to its atypical parent than to its typical parent.
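The proposed mechanism can be made concrete with a minimal numerical sketch. The sketch below assumes a hypothetical two-dimensional feature space and a simple nonlinear warping in which perceptual resolution is magnified near the learned category prototype; all coordinates, the magnification function, and its parameters are illustrative assumptions for exposition, not the authors' model or fitted values.

```python
import numpy as np

# Hypothetical 2-D feature space (all coordinates are illustrative assumptions).
prototype = np.array([0.0, 0.0])   # learned category centre; typical items cluster here
typical   = np.array([1.0, 0.0])   # typical parent, near the prototype
atypical  = np.array([9.0, 0.0])   # atypical parent, far from the prototype
morph     = (typical + atypical) / 2.0   # 50/50 morph: physical midpoint of the parents

def perceptual_distance(a, b, learned, steps=1000):
    """Approximate perceptual distance from a to b by integrating a local
    magnification factor along the straight line between them.

    Pre-training the space is assumed unwarped (magnification = 1 everywhere).
    Post-training, resolution is assumed magnified near the category prototype,
    so each physical unit travelled close to the prototype contributes more
    perceptual distance than a unit travelled far from it.
    """
    total = 0.0
    seg = np.linalg.norm(b - a) / steps              # physical length of each segment
    for i in range(steps):
        p = a + (b - a) * (i + 0.5) / steps          # midpoint of this segment
        if learned:
            magnification = 1.0 + 2.0 / (1.0 + np.linalg.norm(p - prototype))
        else:
            magnification = 1.0
        total += magnification * seg
    return total

for learned in (False, True):
    d_typ = perceptual_distance(morph, typical, learned)
    d_atyp = perceptual_distance(morph, atypical, learned)
    phase = "post-training" if learned else "pre-training"
    print(f"{phase}: morph->typical = {d_typ:.2f}, morph->atypical = {d_atyp:.2f}")
```

Before "training" the morph is equidistant from both parents; afterwards the path from the morph to the typical parent runs close to the prototype and accumulates more perceptual distance, so the morph ends up perceptually closer to the atypical parent. One design note on the sketch: a purely linear re-weighting of the axes would leave a physical-midpoint morph exactly equidistant from its two parents, which is why the illustration uses a warping that is local to the prototype region; whether this is the correct formalization of the learning-induced change is an open assumption here, not a claim from the abstract.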