September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | September 2019
Similarities and differences in the representation of real objects, 2-D images, and 3-D augmented reality displays: Insights from inverse multidimensional scaling
Author Affiliations & Notes
  • Desiree E Holler
    The University of Nevada, Reno
  • Sara Fabbri
    The University of Nevada, Reno
    University of Groningen, Netherlands
  • Jacqueline C. Snow
    The University of Nevada, Reno
Journal of Vision September 2019, Vol.19, 221a. doi:https://doi.org/10.1167/19.10.221a
Abstract

Images of objects are commonly used as proxies to understand the organization of conceptual knowledge in the human brain. However, recent studies from our laboratory have highlighted differences between images and real objects at the level of neural representations, as well as in their contributions to memory, attention, and decision-making. Asking an observer to make judgments about the similarities among a set of objects can provide unique insights into the nature of the underlying representations of those objects in human cortex (Mur et al., 2013). Here, we used inverse multidimensional scaling (Kriegeskorte & Mur, 2012) to investigate whether the subjective properties that observers use to characterize objects during free sorting depend on display format. Observers arranged 21 different objects so that the distances between them reflected their perceived dissimilarities. Critically, one group of participants sorted 2-D images of the objects on a computer monitor using a mouse drag-and-drop action; another group manually sorted objects presented in augmented reality (AR); the remaining group manually sorted real-world exemplars. Participants were free to use any dimension they liked to group the items. By correlating models based on the various sorting criteria with the dissimilarity matrix obtained from the behavioral arrangements, we identified the properties that observers used to separate the items in each format. We found that object representations depended on the format in which the objects were displayed. 2-D images were sorted primarily with respect to the conceptual property of typical location. AR objects were sorted according to their physical size and weight, but less so according to conceptual properties. Real objects, unlike 2-D images and AR stimuli, were sorted with respect to both their conceptual (typical location) and physical (size, weight) properties. These results suggest that real-world objects are coded in a richer, more multidimensional property space than computerized images.
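
The analysis described above, correlating candidate property models with the dissimilarity matrix obtained from the free-sorting arrangements, can be illustrated with a minimal sketch. This is not the authors' code: the data, property models, and variable names below are hypothetical placeholders, and the comparison uses a standard Spearman correlation over the off-diagonal entries of each dissimilarity matrix.

import numpy as np
from scipy.spatial.distance import squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_objects = 21  # number of objects in the study

# Hypothetical behavioral dissimilarity matrix: pairwise distances between
# the arranged objects (symmetric, zero diagonal), standing in for the
# observers' free-sorting arrangement.
positions = rng.random((n_objects, 2))
behavioral_rdm = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)

def property_rdm(ratings):
    """Model dissimilarity = absolute difference in a 1-D property rating."""
    return np.abs(ratings[:, None] - ratings[None, :])

# Hypothetical model matrices built from ratings of candidate properties.
models = {
    "typical_location": property_rdm(rng.random(n_objects)),
    "physical_size": property_rdm(rng.random(n_objects)),
    "weight": property_rdm(rng.random(n_objects)),
}

# Correlate each model matrix with the behavioral matrix using only the
# condensed (off-diagonal) entries.
behavioral_vec = squareform(behavioral_rdm, checks=False)
for name, rdm in models.items():
    rho, p = spearmanr(behavioral_vec, squareform(rdm, checks=False))
    print(f"{name}: Spearman rho = {rho:.2f} (p = {p:.3f})")

In this sketch, the property whose model matrix correlates most strongly with the behavioral matrix is taken as the dimension observers relied on most when arranging the items; with real data, each display format (2-D image, AR, real object) would yield its own behavioral matrix and its own pattern of correlations.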

Acknowledgement: NIH grant R01EY026701 awarded to J.C.S.