August 2009
Volume 9, Issue 8
Vision Sciences Society Annual Meeting Abstract  |   August 2009
Cooperative computation of shape and material from motion
Author Affiliations
  • Katja Doerschner
    Department of Psychology, Bilkent University, Ankara, Turkey
  • Di Zang
    University of Minnesota, Minneapolis, USA
  • Daniel Kersten
Department of Psychology, University of Minnesota, Minneapolis, USA
  • Paul Schrater
Department of Psychology, University of Minnesota, Minneapolis, USA
Journal of Vision August 2009, Vol.9, 51. doi:10.1167/9.8.51
Abstract

In previous work we showed that specular rotating superellipsoids of varying corner-roundedness have characteristic optic flow patterns that predict observers' shininess ratings: more-rounded shapes are perceived as less shiny than cuboidal shapes. However, previous behavioral results also show a strong covariation between percepts of shape and material: shiny objects judged as matte also appeared non-rigid. This suggests that material perception involves the simultaneous inference of shape and material, where material properties include both reflectivity and elasticity. In this work we investigate the computations underlying the perception of shape and material from motion.

Previous work in computer vision provides theory for estimating shape given known material properties (e.g. structure-from-motion and shape-from-specular-flow). We incorporate these results into an “analysis by synthesis” framework that postulates that the visual system has high-level models for inferring the shape of objects in matte-rigid motion sequences (e.g. structure-from-motion), as well as in matte-elastic, shiny-rigid, and possibly shiny-elastic sequences. We show that errors in the model fits can be used to infer the most likely material type for a sequence. In particular, using novel measures of the consistency and error of reconstructed shapes across time, we show that the pattern of fit errors for a model assuming rigid matte objects can be used to predict whether the object is shiny or matte and rigid or non-rigid.
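One such fit-error diagnostic can be sketched as follows, assuming orthographic projection and a Tomasi–Kanade-style rank-3 factorization of tracked image points (the function name and the (F, P, 2) track layout are illustrative choices, not the authors' implementation). For a rigid matte object the registered measurement matrix has rank at most 3, so the residual beyond a rank-3 fit is near zero; non-rigid deformation, or specular features that slide across the surface, inflate it.

```python
import numpy as np

def rigid_fit_error(tracks):
    """tracks: array of shape (F, P, 2) -- P image points tracked over F frames.
    Returns the relative residual of a rank-3 (rigid, orthographic) fit."""
    F, P, _ = tracks.shape
    # Stack the 2F x P measurement matrix and subtract per-frame centroids.
    W = np.vstack([tracks[:, :, 0], tracks[:, :, 1]])
    W = W - W.mean(axis=1, keepdims=True)
    # Under rigid motion and orthographic projection, rank(W) <= 3,
    # so energy beyond the first three singular values signals model violation.
    s = np.linalg.svd(W, compute_uv=False)
    residual = np.sqrt(np.sum(s[3:] ** 2))
    total = np.sqrt(np.sum(s ** 2))
    return residual / total  # ~0 for rigid matte; larger for non-rigid or specular flow
```

A sequence of orthographic projections of a rigidly rotating point cloud yields an error near machine precision, while per-frame deformation of the same tracks raises it, which is the kind of error pattern the abstract proposes as a cue to material and rigidity.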

For example, an object's material and rigidity can be accurately estimated for slowly deforming matte surfaces. Interestingly, however, low-curvature shiny objects generate structure-from-motion model fit errors that are more similar to those of non-rigid matte objects. From these results, we hypothesize that human observers may use a similar analysis-by-synthesis strategy to compute shape and material from motion. The hypothesis predicts perceptual errors on a range of motion stimuli that we compare to human judgments.

Doerschner, K., Zang, D., Kersten, D., & Schrater, P. (2009). Cooperative computation of shape and material from motion [Abstract]. Journal of Vision, 9(8):51, 51a, http://journalofvision.org/9/8/51/, doi:10.1167/9.8.51.
Footnotes
 This work was supported by NIH grant EY015261. Partial support has been provided by the Center for Cognitive Sciences, University of Minnesota.