August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
What does learning look like? Inferring epistemic intent from observed actions
Author Affiliations & Notes
  • Sholei Croom
    Johns Hopkins University
  • Hanbei Zhou
    Johns Hopkins University
  • Chaz Firestone
    Johns Hopkins University
  • Acknowledgements: NSF BCS 2021053
Journal of Vision August 2023, Vol.23, 5585. doi:https://doi.org/10.1167/jov.23.9.5585
Citation: Sholei Croom, Hanbei Zhou, Chaz Firestone; What does learning look like? Inferring epistemic intent from observed actions. Journal of Vision 2023;23(9):5585. https://doi.org/10.1167/jov.23.9.5585.

Abstract

Beyond recognizing objects, faces, and scenes, we can also recognize the actions of other people. Accordingly, a large literature explores how we make inferences about behaviors such as walking, reaching, pushing, lifting, and chasing. However, in addition to actions with physical goals (i.e., trying to *do* something), we also perform actions with epistemic goals (i.e., trying to *learn* something). For example, someone might press on a door to figure out whether it is locked, or shake a box to determine its contents (e.g., a child wondering if a wrapped-up present contains Lego blocks or a teddy bear). Such ‘epistemic actions’ raise an intriguing question: Can observers tell, just by looking, what another person is trying to learn? And if so, how fine-grained is this ability? We filmed volunteers playing two rounds of a ‘physics game’ in which they shook an opaque box to determine either (a) the number of objects hidden inside, or (b) the shape of the objects hidden inside. Then, an independent group of participants watched these videos (without audio) and were instructed to identify which videos showed someone shaking for number and which videos showed someone shaking for shape. Across multiple task variations and hundreds of observers, participants succeeded at this discrimination, accurately determining which actors were trying to learn what, purely by observing the box-shaking dynamics. This result held both for easy discriminations (e.g., 5-vs-15) and hard discriminations (e.g., 2-vs-3), and both for actors who correctly guessed the contents of the box and actors who failed to do so — isolating the role of epistemic *intent* per se. We conclude that observers can visually recognize not only what someone wants to do, but also what someone wants to know, introducing a new dimension to research on visual action understanding.
