September 2021
Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2021
Learning to identify visual signals of intentionality
Author Affiliations & Notes
  • Mohan Ji
    University of Wisconsin - Madison, Department of Psychology
  • Emily Ward
    University of Wisconsin - Madison, Department of Psychology
    McPherson Eye Research Institute
  • C. Shawn Green
    University of Wisconsin - Madison, Department of Psychology
    McPherson Eye Research Institute
  • Footnotes
    Acknowledgements  McPherson Eye Research Institute
Journal of Vision September 2021, Vol.21, 2248. doi:https://doi.org/10.1167/jov.21.9.2248
Abstract

The human visual system provides information not only about the physical state of the environment but also about the causal structures that underlie it. One example is our ability to perceive animacy and intentionality: even when viewing displays of simple shapes on a computer screen, we tend to interpret certain cues (e.g., self-propulsion) as strongly indicative of animacy or intentionality. Predatory chasing is one behavior that carries especially strong cues of both. Here we tested participants’ ability to detect and identify visual signals of intentionality using a chasing task in a noisy environment. Participants viewed videos in which one red dot (“sheep”) tried to escape from a chasing white dot (“wolf”) hiding among 19 other randomly moving white dots; the participants’ task was to identify which of the 20 white dots was the wolf. The videos were generated from the gameplay of another group of participants, who either played as the sheep against a computer-controlled wolf or played against each other, one human as sheep and one as wolf. The videos were further categorized by whether the sheep was eventually caught. We found that viewers were more accurate at detecting the wolf in human-vs.-computer trials than in human-vs.-human trials. Detection accuracy also varied significantly across trials even within the same condition. For each video we quantified the extent to which the wolf showed “direct chasing” behavior, and we found that the more direct the chasing in a trial, the easier it was for participants to identify the wolf. Our results show that participants can identify intentional agents in noisy environments based on particular behaviors those agents exhibit.
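The abstract does not specify how “direct chasing” was quantified. One common choice in the chasing-perception literature, sketched below purely as an illustrative assumption (the function name, data format, and metric are not from the abstract), is the mean cosine similarity between the wolf's frame-to-frame heading and the direction from the wolf to the sheep: a value near 1 indicates direct pursuit, values near 0 or below indicate indirect or random motion.

```python
import math

def chase_directness(wolf_positions, sheep_positions):
    """Hypothetical chase-directness score: mean cosine similarity
    between the wolf's heading and the wolf-to-sheep direction,
    averaged over frames.  Ranges from -1 (fleeing) to 1 (direct
    pursuit).  Positions are lists of (x, y) tuples, one per frame."""
    scores = []
    for t in range(1, len(wolf_positions)):
        wx0, wy0 = wolf_positions[t - 1]
        wx1, wy1 = wolf_positions[t]
        sx, sy = sheep_positions[t - 1]
        # Wolf's displacement this frame, and the wolf-to-sheep vector.
        hx, hy = wx1 - wx0, wy1 - wy0
        dx, dy = sx - wx0, sy - wy0
        hn, dn = math.hypot(hx, hy), math.hypot(dx, dy)
        if hn == 0 or dn == 0:
            continue  # stationary wolf or coincident dots: skip frame
        scores.append((hx * dx + hy * dy) / (hn * dn))
    return sum(scores) / len(scores) if scores else 0.0

# A wolf moving straight toward a stationary sheep scores 1.0.
wolf = [(0, 0), (1, 0), (2, 0)]
sheep = [(10, 0), (10, 0), (10, 0)]
print(round(chase_directness(wolf, sheep), 3))  # → 1.0
```

Under this metric, a per-trial score could then be correlated with wolf-identification accuracy, matching the reported pattern that more direct chasing made the wolf easier to identify.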
