August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
"Things" versus "Stuff" in the Brain
Author Affiliations & Notes
  • Vivian C. Paulun
    Massachusetts Institute of Technology
  • RT Pramod
    Massachusetts Institute of Technology
  • Nancy Kanwisher
    Massachusetts Institute of Technology
  • Footnotes
    Acknowledgements  This work was supported by the German Research Foundation (grant PA 3723/1-1 to VCP), NIH grant DP1HD091947 to NK, a US/UK ONR MURI project (Understanding Scenes and Events through Joint Parsing, Cognitive Reasoning and Lifelong Learning), NSF STC Grant CCF-1231216, and NSF Project 2124136.
Journal of Vision August 2023, Vol.23, 5096. doi:https://doi.org/10.1167/jov.23.9.5096

      Vivian C. Paulun, RT Pramod, Nancy Kanwisher; "Things" versus "Stuff" in the Brain. Journal of Vision 2023;23(9):5096. https://doi.org/10.1167/jov.23.9.5096.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

In a seminal paper published two decades ago, Adelson (2001) noted that "Our world contains both things and stuff, but things tend to get the attention." This remains the case today in the field of cognitive neuroscience. The many fMRI studies of the lateral occipital complex (LOC) have focused almost exclusively on the role of this region in extracting the 3D shape of Things, without asking whether this region may also respond to Stuff with no fixed shape, like honey, sand, or water. Similarly, investigations of the "physics network" previously implicated in visual intuitive physics (Fischer et al., 2016) have to date tested only Things, even though the physics of Stuff plays a comparable role in everyday life. Here, we asked whether LOC and the physics network are engaged when observing Stuff. We created 120 photorealistic short movie clips of four different computer-simulated substances (liquids and granular Stuff, and non-rigid and rigid Things) interacting with other objects, e.g., colliding with obstacles. The four types of videos, as well as scrambled versions of each, were presented in a blocked fMRI design while subjects (N=6) performed an orthogonal color-change detection task. Independently localized LOC and the physics network showed higher activation for all materials than for scrambled controls (p < .05), whereas the opposite pattern was found in V1 (p < .05). Most importantly, we found that the physics network responded more to rigid and non-rigid Things than to liquid and granular Stuff (p < .05), whereas LOC responded at least as strongly to Stuff as to Things. These findings suggest that the physics network may be more engaged by the physics of Things than of Stuff, whereas LOC is not restricted to extracting the fixed 3D shape of Things but is equally engaged by Stuff with dynamically changing shapes.
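The key comparisons reported here are within-subject contrasts of mean ROI responses across stimulus conditions. As a purely illustrative sketch of that kind of analysis (not the authors' pipeline; all names, numbers, and data below are made up), a Things-versus-Stuff contrast for one ROI could be computed as follows:

```python
import numpy as np
from scipy import stats

# Synthetic per-subject mean ROI responses (e.g., beta weights), one column per
# stimulus class. These values are illustrative assumptions, not real data.
rng = np.random.default_rng(0)
n_subjects = 6
conditions = ["liquid", "granular", "non_rigid", "rigid"]

# rows = subjects, columns = conditions
roi_means = rng.normal(loc=[0.8, 0.9, 1.2, 1.3], scale=0.2,
                       size=(n_subjects, len(conditions)))

# "Stuff" = liquids + granular materials; "Things" = non-rigid + rigid objects
stuff = roi_means[:, :2].mean(axis=1)
things = roi_means[:, 2:].mean(axis=1)

# Paired comparison of Things vs. Stuff across subjects
t_val, p_val = stats.ttest_rel(things, stuff)
print(f"Things vs. Stuff: t({n_subjects - 1}) = {t_val:.2f}, p = {p_val:.3f}")
```

The same paired contrast would be run separately for each independently localized region (LOC, the physics network, V1) and for the materials-versus-scrambled comparison.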
