Open Access
Vision Sciences Society Annual Meeting Abstract  |   December 2022
Automatic simulation of unseen physical events
Author Affiliations & Notes
  • Tal Boger
    Yale University
    Johns Hopkins University
  • Chaz Firestone
    Johns Hopkins University
  • Footnotes
    Acknowledgements  This project was funded by NSF BCS 2021053 awarded to C.F.
Journal of Vision December 2022, Vol. 22, 3637. https://doi.org/10.1167/jov.22.14.3637

Tal Boger, Chaz Firestone; Automatic simulation of unseen physical events. Journal of Vision 2022;22(14):3637. https://doi.org/10.1167/jov.22.14.3637.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

When a feature (e.g., the letter T) becomes associated with an object (e.g., a square), we are faster to detect that feature if it later appears on that same object than if it appears elsewhere, even after the object changes location—a foundational result known as the object-specific preview benefit (OSPB). But what if that feature is a physical entity with mass and extent (e.g., a ping-pong ball with a T printed on it), and the object is a container (e.g., a wooden box) in which the ball could slide and roll? Do we (merely) create a static association between the feature and the object? Or do we represent the feature’s location “within” the object, even after the feature disappears from view? Here, we exploit the OSPB to explore how attention automatically represents the physical dynamics of unseen objects. Observers viewed a letter drop into a box, which then moved to another location before abruptly disappearing, revealing the letter in one of several locations; then, observers reported whether or not it had changed (e.g., remaining a T, or changing from a T to an L). If object-feature bindings rely on simple association, then subjects should be fastest to detect the feature in the same box-relative location where it last appeared. However, our results showed that facilitation was greatest when the feature appeared where it “should” have been, as predicted by physical simulation of how a ball slides and rolls within a moving box (which would leave it in a different box-relative location than it started). Follow-up experiments ruled out lower-level explanations, including biases toward the screen’s center and to the last screen-relative location where the feature was seen. We suggest that perception automatically simulates the forces acting on unseen objects, such that feature-object bindings incorporate complex physical interactions.
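
To make the contrast between the two accounts concrete, the sketch below works through a toy one-dimensional version of the display: under a static-association account the ball's box-relative position is unchanged after the box moves, whereas a simple physics simulation (viscous friction plus rigid walls, in this sketch) leaves it in a different box-relative location. This is not the authors' stimulus or analysis code; the box trajectory, friction constant, box width, and time step are hypothetical values chosen only to illustrate the logic.

```python
import math

# --- hypothetical parameters (illustrative only, not the authors' values) ---
DT = 0.001          # simulation time step (s)
T_MOVE = 1.0        # duration of the box's movement (s)
BOX_SHIFT = 3.0     # how far the box slides (arbitrary screen units)
HALF_WIDTH = 0.5    # half of the box's interior width
FRICTION = 2.0      # viscous coupling of the ball to the box floor (1/s)


def box_pos(t):
    """Smooth start-and-stop trajectory for the box (cosine ease-in/ease-out)."""
    if t >= T_MOVE:
        return BOX_SHIFT
    return BOX_SHIFT * 0.5 * (1.0 - math.cos(math.pi * t / T_MOVE))


def simulate_ball(initial_offset):
    """Return the ball's final offset *relative to the box* under a toy 1-D
    physics model: friction drags the ball toward the box's velocity, and the
    box walls stop it (inelastic contact)."""
    x = box_pos(0.0) + initial_offset   # ball position in screen coordinates
    v = 0.0                             # ball velocity
    t = 0.0
    while t < T_MOVE + 0.5:             # let the ball settle after the box stops
        box_next = box_pos(t + DT)
        box_v = (box_next - box_pos(t)) / DT
        v += FRICTION * (box_v - v) * DT   # friction pulls v toward the box's v
        x += v * DT
        offset = x - box_next
        if offset < -HALF_WIDTH:           # hit the trailing wall
            x, v = box_next - HALF_WIDTH, box_v
        elif offset > HALF_WIDTH:          # hit the leading wall
            x, v = box_next + HALF_WIDTH, box_v
        t += DT
    return x - box_pos(t)


start_offset = 0.0                      # ball dropped into the middle of the box
static_prediction = start_offset        # association account: same box-relative spot
physics_prediction = simulate_ball(start_offset)

print(f"static-binding prediction (box-relative offset):     {static_prediction:+.2f}")
print(f"physics-simulation prediction (box-relative offset): {physics_prediction:+.2f}")
```

With these illustrative values the simulated ball ends up against the box's leading wall rather than where it was dropped, i.e., in a different box-relative location than it started; that divergence between the two predictions is the kind of difference the probe locations in the experiment are designed to tease apart.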
