Vision Sciences Society Annual Meeting Abstract | September 2024
Journal of Vision, Volume 24, Issue 10 (Open Access)
Can material-robust detection of 3D non-rigid deformation be explained by predictive processing through generative models?
Author Affiliations & Notes
  • Shin'ya Nishida
    Cognitive Informatics Lab, Graduate School of Informatics, Kyoto University
    NTT Communication Science Labs, Nippon Telegraph and Telephone Corporation
  • Mitchell van Zuijlen
    Cognitive Informatics Lab, Graduate School of Informatics, Kyoto University
  • Yung-Hao Yang
    Cognitive Informatics Lab, Graduate School of Informatics, Kyoto University
  • Jan Jaap van Assen
    Perceptual Intelligence Lab, Industrial Design Engineering, Delft University of Technology
  • Footnotes
    Acknowledgements  Supported by JSPS Kakenhi JP20H05957 and a Marie-Skłodowska-Curie Actions Individual Fellowship (H2020-MSCA-IF-2019-FLOW).
Journal of Vision September 2024, Vol.24, 430. doi:https://doi.org/10.1167/jov.24.10.430
      Citation: Shin'ya Nishida, Mitchell van Zuijlen, Yung-Hao Yang, Jan Jaap van Assen; Can material-robust detection of 3D non-rigid deformation be explained by predictive processing through generative models? Journal of Vision 2024;24(10):430. https://doi.org/10.1167/jov.24.10.430.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Depending on the optical material properties of an object (e.g., matte, glossy, transparent), the optical flow generated by non-rigid deformation of a 3D object changes dramatically. Nevertheless, a recent study (van Zuijlen et al., VSS 2022) showed that sensitivity for detecting deformation of a rotating object is similar for matte and glossy objects, and only slightly worse for transparent objects. What makes deformation perception robust to material changes? One possibility is that the visual system constructs a generative model of each object that correctly predicts how the image should change if the object moves rigidly, and detects deformation when the image deviates significantly from this prediction. According to this hypothesis, deformation detection sensitivity should be impaired when additional image deviations from the model's predictions are produced by unusual global movements of the surrounding lightfield. In the experiment, the target object was an infinite-knot stimulus rotating around a vertical axis, rendered with one of four optical properties (dot-textured matte, glossy, mirror-like, or transparent). The object was deformed by an inward pulling force at seven levels of intensity (including a rigid condition). Using the Maxwell Renderer, a movie of each object was rendered under one of three lightfield conditions: static, imploding, or rotating. The object's background was black-masked so that the lightfield change was not directly visible to observers. Observers performed a 2-IFC task, choosing which of two stimuli (one always rigid) deformed more. The results do not support the prediction of the generative-model hypothesis: the lightfield manipulation had no significant influence on the deformation detection threshold, nor on the effect of material on the threshold. Rather, the results support the idea that, in deformation detection, the visual system effectively ignores the complex flow produced by material-dependent features (e.g., highlights, refractions).
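
To make the hypothesis under test concrete, the following is a minimal sketch (in Python; not the authors' implementation) of deformation detection as predictive processing: a rigid-motion generative model predicts the optical flow for rotation about the vertical axis, and deformation is flagged when the observed flow deviates from that prediction by more than a criterion. The function names, the perspective-projection model, and the RMS criterion are all illustrative assumptions.

```python
import numpy as np

def predicted_rigid_flow(points_3d, omega, focal=1.0):
    """Image-plane flow predicted by a rigid-motion generative model for
    3D points on an object rotating about the vertical (y) axis with
    angular velocity `omega` (rad/frame), under perspective projection."""
    x, y, z = points_3d.T
    # Rigid rotation about the y-axis: dX/dt = omega*Z, dY/dt = 0, dZ/dt = -omega*X
    dx, dz = omega * z, -omega * x
    # Differentiate the projection u = f*X/Z, v = f*Y/Z with respect to time
    du = focal * (dx * z - x * dz) / z**2
    dv = focal * (-y * dz) / z**2
    return np.stack([du, dv], axis=1)

def deformation_detected(observed_flow, points_3d, omega, criterion=0.05):
    """Flag deformation when the RMS residual between observed and
    predicted flow exceeds an (assumed) internal criterion."""
    residual = observed_flow - predicted_rigid_flow(points_3d, omega)
    return float(np.sqrt((residual ** 2).mean())) > criterion
```

On this account, global lightfield motion (imploding or rotating) should inflate the residual for glossy, mirror-like, and transparent objects, because highlights and refractions move with the lightfield rather than with the rigid surface; the null effect of the lightfield manipulation reported above is what argues against the hypothesis.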
