September 2021, Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Seeing cloth-covered objects: A case study of intuitive physics in perception, attention, and memory
Author Affiliations & Notes
  • Kimberly W. Wong
    Yale University
  • Wenyan Bi
    Yale University
  • Ilker Yildirim
    Yale University
  • Brian Scholl
    Yale University
  • Footnotes
    Acknowledgements  This project was funded by ONR MURI #N00014-16-1-2007 awarded to BJS.
Journal of Vision September 2021, Vol.21, 2211. doi:https://doi.org/10.1167/jov.21.9.2211
Citation: Kimberly W. Wong, Wenyan Bi, Ilker Yildirim, Brian Scholl; Seeing cloth-covered objects: A case study of intuitive physics in perception, attention, and memory. Journal of Vision 2021;21(9):2211. https://doi.org/10.1167/jov.21.9.2211.

Abstract

We typically think of intuitive physics in terms of high-level cognition, but might aspects of physics also be extracted during lower-level visual processing? In short, might we not only *think* about physics, but also *see* it? We explored this in the context of *covered* objects -- as when you see a chair with a blanket draped over it. To successfully recover the underlying structure of such scenes (and determine which image components reflect the object itself), we must account for the physical interactions between cloth, gravity, and object -- which govern not only the way the cloth may wrinkle and fold on itself, but also the way it hangs across the object's edges and corners. We explored this using change detection: Observers saw two images of cloth-covered objects appear quickly one after the other, and simply had to detect whether the two raw images were identical. On "Same Object" trials, the superficial folds and creases of the cloth changed dramatically, but the underlying object was identical (as might happen if you threw a blanket onto a chair repeatedly). On "Different Object" trials, in contrast, both the cloth and the underlying covered object changed. Critically, "Same Object" trials always had *greater* visual change than "Different Object" trials -- in terms of both brute image metrics (e.g. the number of changed pixels) and higher-level features (as quantified by distance in vectorized feature-activation maps from relatively late layers in a convolutional neural network trained for object recognition [VGG16]). Observers were far better at detecting changes on "Different Object" trials, despite the lesser degree of overall visual change. Just as vision "discounts the illuminant" to recover the deeper property of reflectance in lightness perception, visual processing uses intuitive physics to "discount the cloth" in order to recover the deeper underlying structure of objects.
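As a rough illustration of the two change metrics described above, the sketch below computes (a) a brute pixel-change count between two trial images and (b) a Euclidean distance between vectorized feature-activation maps from a relatively late VGG16 layer. The abstract names VGG16 but does not specify the layer, preprocessing, or distance measure; the conv5_3 output, standard ImageNet normalization, Euclidean distance, and file names used here are assumptions for illustration only, not the authors' exact pipeline.

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Standard ImageNet preprocessing for VGG16 (assumed, not specified in the abstract).
preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
# "Relatively late" layer: here we arbitrarily take activations after conv5_3 + ReLU
# (modules 0..29 of vgg.features).
late_layers = vgg.features[:30]

def pixel_change(img_a: Image.Image, img_b: Image.Image) -> int:
    """Brute image metric: number of pixels that differ between the two images."""
    a = np.asarray(img_a.convert("RGB"))
    b = np.asarray(img_b.convert("RGB"))
    return int(np.any(a != b, axis=-1).sum())

@torch.no_grad()
def feature_distance(img_a: Image.Image, img_b: Image.Image) -> float:
    """Higher-level metric: Euclidean distance between vectorized
    feature-activation maps from a late VGG16 layer."""
    fa = late_layers(preprocess(img_a).unsqueeze(0)).flatten()
    fb = late_layers(preprocess(img_b).unsqueeze(0)).flatten()
    return torch.dist(fa, fb).item()

# Example usage (hypothetical file names):
# a, b = Image.open("trial_A.png"), Image.open("trial_B.png")
# print(pixel_change(a, b), feature_distance(a, b))
```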
