Vision Sciences Society Annual Meeting Abstract | September 2021
Human judgments of relative 3D pose of novel complex objects
Author Affiliations & Notes
  • Frieder Hartmann
    Justus Liebig University Gießen
  • Katherine R. Storrs
    Justus Liebig University Gießen
  • Yaniv Morgenstern
    Justus Liebig University Gießen
  • Guido Maiello
    Justus Liebig University Gießen
  • Roland W. Fleming
    Justus Liebig University Gießen
    Centre for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Gießen
  • Footnotes
    Acknowledgements: This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation; project number 222641018-SFB-TRR 135), the International Research Training Group ‘The Brain in Action’ (IRTG-1901), and the European Research Council Consolidator Award ‘SHAPE’ (ERC-CoG-2015-682859).
Journal of Vision September 2021, Vol.21, 2873. doi:https://doi.org/10.1167/jov.21.9.2873
Citation: Frieder Hartmann, Katherine R. Storrs, Yaniv Morgenstern, Guido Maiello, Roland W. Fleming; Human judgments of relative 3D pose of novel complex objects. Journal of Vision 2021;21(9):2873. https://doi.org/10.1167/jov.21.9.2873.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

A 3D object seen from different viewpoints can elicit vastly different retinal images. Differences between views depend on object geometry and initial pose, rendering relative pose estimation computationally challenging. Still, humans can easily judge object identity across views and estimate the relative pose between them. Here, we sought to measure how accurately observers can estimate pose similarity for 3D objects, and how these judgments are influenced by object geometry and by changes in the object's retinal projection. We first mapped out human judgments of relative viewpoint using a multi-arrangement task. On each trial, observers (N=16) were asked to spatially arrange 31 views of one of three novel or three familiar 3D objects by viewpoint similarity. The resulting arrangements broadly matched ground-truth viewpoint differences, with deviations that were consistent across observers (i.e., representational similarity analysis revealed correlations with ground truth below the noise ceiling across objects). We implemented several candidate computational models, based on 2D image features or object geometry, and evaluated their ability to predict human judgments. Strategies using 2D features failed to account for the human data. However, a metric based on the intersection over union of visible surface area across views (‘Surface IoU’) predicted human judgments on par with ground truth. To maximize our power to differentiate between candidate strategies, we selected triads of viewpoints for individual objects over which pairs of models strongly disagreed (e.g., where similar changes in viewing angle produced very different changes in image pixels). We presented these triads in a two-alternative forced-choice experiment in which participants judged which of two views appeared closer to a target view. Across triad judgments and free arrangements, we gathered a rich dataset of human viewpoint perception for many objects and viewpoints, which allows us to evaluate the ability of computational models to predict human strategies for judging relative viewpoint.
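The abstract describes the Surface IoU metric only at a high level. As a concrete illustration, the following minimal Python sketch shows one way such a measure could be computed for a triangle mesh, using ray casting (via the trimesh library) to estimate which faces are visible from each camera position. The function names, the sampling-based visibility test, and the example file are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the abstract does not specify the authors'
# implementation. Visibility is approximated by casting rays from the
# camera toward points sampled on the mesh surface; a face counts as
# "visible" if at least one ray hits it first.
import numpy as np
import trimesh

def visible_face_indices(mesh, camera_pos, n_rays=20000):
    """Approximate the set of face indices visible from camera_pos."""
    targets, _ = trimesh.sample.sample_surface(mesh, n_rays)
    origins = np.tile(np.asarray(camera_pos, dtype=float), (n_rays, 1))
    directions = targets - origins
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    # Index of the first triangle each ray hits (-1 if the ray misses).
    first_hit = mesh.ray.intersects_first(origins, directions)
    return set(first_hit[first_hit >= 0])

def surface_iou(mesh, cam_a, cam_b):
    """Intersection over union of visible surface *area* between two views."""
    vis_a = visible_face_indices(mesh, cam_a)
    vis_b = visible_face_indices(mesh, cam_b)
    areas = mesh.area_faces
    intersection = sum(areas[i] for i in vis_a & vis_b)
    union = sum(areas[i] for i in vis_a | vis_b)
    return intersection / union if union > 0 else 0.0

# Hypothetical usage with a placeholder mesh file:
mesh = trimesh.load("novel_object.obj")
print(surface_iou(mesh, cam_a=[0.0, 0.0, 2.0], cam_b=[2.0, 0.0, 0.0]))
```

A dissimilarity such as 1 - surface_iou(mesh, cam_a, cam_b) could then be compared against human arrangement distances, for example via representational similarity analysis as described in the abstract.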
