July 2019, Volume 19, Issue 8
Open Access
OSA Fall Vision Meeting Abstract
Similarities and differences in the representation of real objects versus 2-D planar images and 3-D augmented reality displays: insights from inverse multidimensional scaling
Author Affiliations
  • Desiree Holler
    Department of Psychology, University of Nevada, Reno, Nevada
  • Sara Fabbri
    Department of Psychology, University of Nevada, Reno, Nevada, and Department of Psychology, University of Groningen, the Netherlands
  • Jacqueline Snow
    Department of Psychology, University of Nevada, Reno, Nevada
Journal of Vision July 2019, Vol.19, 62. doi:https://doi.org/10.1167/19.8.62
      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Images of objects are commonly used as proxies to access the organization of conceptual knowledge in the human brain. However, recent studies from our laboratory have highlighted differences between images and real objects at the level of their neural representation, as well as in their contributions to memory, attention, and decision-making. Asking an observer to make judgments about the similarities among a set of objects can provide unique insights into the nature of the underlying neural representations of those objects in human cortex (Mur et al., 2013). Here, we used inverse multidimensional scaling (Kriegeskorte & Mur, 2012) to investigate the subjective properties that observers use to characterize objects during free-sorting, when the stimuli are displayed as 2-D images of objects, 3-D augmented reality (AR) objects, or real objects. Observers were asked to arrange 21 different items so that the distances between them reflected their perceived dissimilarities. One group of participants sorted 2-D images on a computer monitor using a mouse drag-and-drop action; another group manually sorted AR displays of the same objects; the remaining group manually arranged real-world objects. Critically, participants were free to use any dimension they liked to group the items, and were asked to report their sorting principle to the experimenter prior to sorting the stimuli. By correlating models based on the various sorting criteria with the dissimilarity matrix obtained from the behavioral arrangements, we identified the properties that observers used to separate the items in each format. Using stepwise linear regression, we found that both shared and distinct criteria were used to arrange the stimuli across formats. For example, in all formats, the location where an item is typically encountered in everyday life was a salient dimension, as was elongation. However, unlike 2-D images, real objects and AR stimuli were also sorted based on their physical size. Critically, only real objects were sorted based on their weight. These results suggest that although images and real objects are represented similarly with respect to their semantic properties, images lack the representational richness of their real-world counterparts.
