Abstract
When presented with just a single exemplar of an unfamiliar object, we have certain visual intuitions about what other objects from the same class might look like. This is an impressive inference because it is radically under-constrained: objects can vary along many dimensions, so how do we work out which features of an object are important and which are likely to vary across samples? To investigate this, 17 participants were shown 2D silhouettes of 8 exemplar objects from different classes, each created to have one or more noticeable features (e.g., sharp corners, distinctive concavities). For each exemplar, participants were asked to draw, on a tablet computer, 12 new objects belonging to the same class, yielding 1632 drawn shapes. The drawings reveal that participants derived very specific inferences about the objects, varying some features substantially while assiduously preserving others across variants. Despite substantial variation within each class, participants were highly consistent in the features they chose to include in the variants. A second group of participants viewed the drawings, rated their similarities, and assigned the variants to the original categories. The results reveal high agreement between observers, suggesting robust and consistent inferences. We also analysed the shapes using dozens of image-computable shape metrics (e.g., area, perimeter, curvature statistics, Fourier descriptors). Using multidimensional scaling (MDS), the drawn shapes were compared to each other and to the exemplars, revealing systematic variations in the features that defined each class. Together, these findings suggest that participants infer sophisticated generative models of object appearance from single exemplars.
We suggest that when presented with a shape, the visual system parses it, identifies perceptually significant features, and represents those features parametrically, providing a means to generate new samples from the internal representation by varying the parameters of the feature representation.
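As an illustration of how image-computable shape descriptors and MDS can be combined, the following is a minimal sketch, not the paper's actual pipeline: the particular metrics (area, perimeter, compactness) and the Euclidean distance in feature space are assumptions chosen for simplicity.

```python
import numpy as np

def shape_metrics(polygon):
    """Simple image-computable descriptors for a closed 2D polygon
    given as an (N, 2) vertex array. Illustrative metrics only."""
    x, y = polygon[:, 0], polygon[:, 1]
    # Shoelace formula for enclosed area
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # Perimeter: sum of edge lengths around the closed contour
    edges = np.roll(polygon, -1, axis=0) - polygon
    perimeter = np.linalg.norm(edges, axis=1).sum()
    # Compactness: 1 for a circle, smaller for elongated or jagged shapes
    compactness = 4 * np.pi * area / perimeter**2
    return np.array([area, perimeter, compactness])

def classical_mds(distances, n_dims=2):
    """Classical (Torgerson) MDS: embed items so that Euclidean distances
    in the embedding approximate the given pairwise distance matrix."""
    n = distances.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (distances**2) @ J          # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:n_dims]  # keep largest eigenvalues
    return eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0))

# Toy example: a square, a rectangle, and a triangle stand in for
# drawn shape variants and an exemplar
shapes = [
    np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float),
    np.array([[0, 0], [2, 0], [2, 1], [0, 1]], float),
    np.array([[0, 0], [1, 0], [0.5, 1]], float),
]
feats = np.array([shape_metrics(s) for s in shapes])
dists = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
embedding = classical_mds(dists)  # (3, 2) coordinates for visualisation
```

In such an embedding, variants drawn from the same exemplar would be expected to cluster together, with the spread of each cluster reflecting which features participants treated as free to vary.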