Abstract
Images of objects are commonly used as proxies to access the organization of conceptual knowledge in the human brain. However, recent studies from our laboratory have highlighted differences between images and real objects at the level of their neural representation, as well as in their contributions to memory, attention, and decision-making. Asking an observer to make judgments about the similarities among a set of objects can provide unique insights into the nature of the underlying neural representations of those objects in human cortex (Mur et al., 2013). Here, we used inverse multidimensional scaling (Kriegeskorte and Mur, 2012) to investigate the subjective properties that observers use to characterize objects during free sorting, when the stimuli are displayed as 2-D images of objects, 3-D augmented reality (AR) objects, or real objects. Observers were asked to arrange 21 different items so that the distances between them reflected their perceived dissimilarities. One group of participants sorted 2-D images on a computer monitor using a mouse drag-and-drop action; another group manually sorted AR displays of the same objects; the remaining group manually arranged real-world objects. Critically, participants were free to use any dimension they liked to group the items, and were asked to report their sorting principle to the experimenter before sorting the stimuli. By correlating models based on the various sorting criteria with the dissimilarity matrices obtained from the behavioral arrangements, we identified the properties that observers used to separate the items in each format. Using stepwise linear regression, we found that both common and distinct criteria were used to arrange the stimuli across formats. For example, in all formats, the location where an item is typically encountered in everyday life was a salient dimension, as was elongation. However, unlike 2-D images, real objects and AR stimuli were sorted on the basis of their physical size.
Critically, only real objects were sorted on the basis of their weight. These results suggest that although images and real objects are represented similarly with respect to their semantic properties, images lack the representational richness of their real-world counterparts.