Abstract
Two-dimensional (2-D) images of objects are commonly used as proxies to access the organization of conceptual knowledge in the brain. However, various studies from our lab highlight differences between images and real objects at the neural level (Snow et al., 2011), as well as in their contributions to memory (Snow et al., 2014), attention (Gomez et al., in press), and decision-making (Romero et al., in press). Asking an observer to judge the similarities among a set of objects can provide unique insights into the underlying neural representations in human cortex (Mur et al., 2013). Here, we used inverse multidimensional scaling (Kriegeskorte & Mur, 2012) to investigate the properties that observers use to characterize objects displayed as 2-D images versus real objects. Observers arranged 21 different items so that the distances between them reflected their perceived dissimilarities. Half of the participants (n=68) arranged 2-D images on a computer monitor; the other half manually arranged the corresponding real-world exemplars on a tabletop. Critically, participants were not given a criterion for sorting the objects but were free to use any dimension they liked to group the items. By correlating models based on the various candidate sorting criteria with the dissimilarity matrix obtained from the behavioral arrangements, we identified the properties that observers used to separate the items within each format. Stepwise linear regression showed that both shared and format-specific criteria were used to arrange images and real objects. For example, in both formats, the location where an item is typically encountered was a salient dimension, as was elongation. However, unlike 2-D images, real objects were also sorted on the basis of properties relevant to their manipulation, specifically their physical size and weight. These results suggest that, despite their similarity with respect to semantic properties, 2-D images lack the representational richness of their real-world counterparts.
Meeting abstract presented at VSS 2018
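
The analysis pipeline described in the abstract can be made concrete with a short sketch. The Python code below is a hypothetical illustration, not the study's actual analysis code: the item coordinates and model predictors are random stand-ins for the real arrangement data and criterion ratings. It shows the two core steps named above: assembling a behavioral dissimilarity matrix from the pairwise distances of an inverse-MDS arrangement, then relating candidate model dissimilarity matrices to it, first by rank correlation and then by a simple forward stepwise regression.

    # Minimal sketch of the inverse-MDS analysis, assuming hypothetical data.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_items = 21

    # Hypothetical 2-D coordinates of the 21 items as placed by one observer
    # (screen positions for images, digitized table positions for real objects).
    coords = rng.random((n_items, 2))

    # Behavioral dissimilarity matrix in condensed form: Euclidean distances
    # between all item pairs; larger distance = judged more dissimilar.
    behavioral_rdm = pdist(coords, metric="euclidean")

    # Candidate model dissimilarity matrices, one per sorting criterion. In
    # practice these would come from independent ratings of each property;
    # here they are random placeholders.
    models = {
        "typical_location": pdist(rng.random((n_items, 1))),
        "elongation": pdist(rng.random((n_items, 1))),
        "physical_size": pdist(rng.random((n_items, 1))),
        "weight": pdist(rng.random((n_items, 1))),
    }

    # Step 1: rank-correlate each model with the behavioral dissimilarities
    # (rank correlation ignores monotonic rescaling of the distances).
    for name, model_rdm in models.items():
        rho, p = spearmanr(model_rdm, behavioral_rdm)
        print(f"{name}: rho = {rho:.3f}, p = {p:.3f}")

    # Step 2: simple forward stepwise selection, greedily adding the model
    # that most improves the R^2 of a least-squares fit to the behavior.
    def forward_stepwise(y, predictors, min_gain=0.01):
        selected, remaining, best_r2 = [], dict(predictors), 0.0
        while remaining:
            gains = {}
            for name, x in remaining.items():
                cols = [predictors[s] for s in selected] + [x, np.ones_like(y)]
                X = np.column_stack(cols)
                beta, *_ = np.linalg.lstsq(X, y, rcond=None)
                resid = y - X @ beta
                gains[name] = 1.0 - resid.var() / y.var()
            best = max(gains, key=gains.get)
            if gains[best] - best_r2 < min_gain:
                break  # stop when the best candidate adds too little
            best_r2 = gains[best]
            selected.append(best)
            del remaining[best]
        return selected, best_r2

    selected, r2 = forward_stepwise(behavioral_rdm, models)
    print(f"selected criteria: {selected}, R^2 = {r2:.3f}")

With real data, the random placeholders would be replaced by the observers' arrangement coordinates and by per-criterion predictor matrices, and the same two steps would be run separately for the image and real-object groups to compare which criteria survive selection in each format.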