Abstract
Evidence from imaging, electrophysiology, and behavior supports the idea that objects are represented as collections of parts, but few studies have investigated how the spatial relationships between parts are represented. Such relations are critical because changing them can change the object, just as changing the order of phonemes can change the meaning of a word (e.g., “rough” and “fur”). One way to define the relationships between an object's parts is to specify a medial axis within each part. These axes can define an invariant structure for an object, since the relationships between medial axes remain constant despite variation in view. Previously, we showed that multi-voxel patterns in intermediate visual areas can distinguish groups of objects with distinct medial axis structures, even when the objects' parts differ and the orientations of the objects vary (Lescroart & Biederman, VSS 2009). Would naïve human subjects spontaneously judge different novel objects with the same axis structures to be similar, despite variation in other dimensions? In an “inverse multidimensional scaling” paradigm, naïve subjects rated the similarity of a set of novel objects that varied in medial axis structure, in the parts that composed the objects, and in overall orientation. On each trial, subjects viewed a display of five objects, with instructions to place similar objects close together and dissimilar objects farther apart. The distance between the centers of each pair of objects was recorded as the dependent measure. In contrast to prior sorting studies, in which subjects grouped objects based on a single part or dimension, non-metric multidimensional scaling revealed that subjects weighted both the objects' parts and the objects' medial axis structures in their similarity judgments. These results, together with the fMRI results, suggest that axis structures are automatically computed by the visual system and spontaneously used as the basis for perceptual similarity judgments.
NSF BCS 04-20794, 05-31177, 06-17699 to I.B.
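To make the shape of the analysis concrete, the following Python sketch (our illustration, not the authors' code) averages per-trial center-to-center distances into an object-by-object dissimilarity matrix and submits it to non-metric multidimensional scaling via scikit-learn. The stimulus count, trial structure, and random placements are placeholder assumptions standing in for the real arrangement data.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from sklearn.manifold import MDS

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for one subject's data: on each trial, five objects
    # drawn from the full stimulus set are arranged on the screen.
    n_objects, n_trials = 15, 40
    dissim_sum = np.zeros((n_objects, n_objects))
    counts = np.zeros((n_objects, n_objects))

    for _ in range(n_trials):
        ids = rng.choice(n_objects, size=5, replace=False)
        centers = rng.uniform(0.0, 1.0, size=(5, 2))  # placeholder placements
        # Dependent measure: distance between the centers of each pair of objects.
        d = squareform(pdist(centers))
        dissim_sum[np.ix_(ids, ids)] += d
        counts[np.ix_(ids, ids)] += 1

    # Average per-pair distances across trials into one dissimilarity matrix.
    # (Pairs that never co-occurred are left at 0 here; a real design would
    # ensure every pair appears together on some trial.)
    dissim = np.divide(dissim_sum, counts,
                       out=np.zeros_like(dissim_sum), where=counts > 0)
    np.fill_diagonal(dissim, 0.0)

    # Non-metric MDS: find a 2-D embedding that preserves the rank order of
    # the pairwise dissimilarities.
    nmds = MDS(n_components=2, metric=False,
               dissimilarity="precomputed", random_state=0)
    embedding = nmds.fit_transform(dissim)

With real placement data in place of the random stand-ins, the rows of the resulting embedding could be inspected for clustering by medial axis structure versus by part identity.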