September 2024
Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Detecting pursuit in dynamic visual scenes
Author Affiliations & Notes
  • Maria Kon
    U.S. Naval Research Laboratory
  • Sangeet Khemlani
    U.S. Naval Research Laboratory
  • Andrew Lovett
    U.S. Naval Research Laboratory
  • Footnotes
Acknowledgements: This research was funded by a National Research Council Research Associateship awarded to MK, and data were collected by Knexus Research Corporation.
Journal of Vision September 2024, Vol.24, 255. doi:https://doi.org/10.1167/jov.24.10.255
      Maria Kon, Sangeet Khemlani, Andrew Lovett; Detecting pursuit in dynamic visual scenes. Journal of Vision 2024;24(10):255. https://doi.org/10.1167/jov.24.10.255.



      © ARVO (1962-2015); The Authors (2016-present)

Abstract

To determine whether one object is pursuing another, people must track both objects over time and compare their locations. Currently, no computational theory exists to explain this behavior. To support the development of such a theory, we created a novel paradigm for studying how people detect pursuit. Participants (n = 94) watched videos of uniquely colored circles moving on a black screen and a) judged whether a red circle was pursuing any other circle, and b) indicated the color of the pursued circle. The task is innovative in two respects: a) participants could respond as soon as they made a judgment, which provided information about the time course of processing; and b) participants selected the circle they thought was being pursued even when they incorrectly judged that pursuit was occurring, which provided information about the types of errors made. We compared this information to the performance of a computational model we developed within a cognitive architecture. The model operates by detecting the red circle, tracking it and computing its trajectory, scanning along that trajectory to detect candidates for pursuit, and integrating that information over time. The model makes accuracy and reaction-time predictions, and its results are largely consistent with the empirical findings: like humans, it is slower on pursuit-absent trials than on pursuit-present trials, and slower when more circles are present. It also predicts which videos are reliably more difficult, and it makes human-like detection errors. The study and modeling results suggest that observers who erroneously report a pursuit relation do so because some non-pursued circle happens to move parallel to the red circle for a short duration, and that observers who fail to detect a pursuit relation do so because they render a judgment too early. Future work will examine how systematic these errors are.
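The model's pipeline (track the red circle, compute its heading, scan along its trajectory for candidates, and integrate evidence across frames) can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the angular scan window, and the evidence threshold below are all illustrative assumptions, and the real model runs inside a cognitive architecture with attention and tracking mechanisms this toy version omits.

```python
import math

def detect_pursuit(tracks, scan_angle=0.5, threshold=0.6):
    """Illustrative sketch, not the published model.

    tracks: dict mapping color -> list of (x, y) positions per frame.
    Returns (pursuit_present, pursued_color or None).
    """
    red = tracks["red"]
    votes = {c: 0 for c in tracks if c != "red"}  # evidence per candidate
    frames = 0
    for t in range(1, len(red)):
        # Heading of the red (putative pursuer) circle on this frame.
        dx, dy = red[t][0] - red[t - 1][0], red[t][1] - red[t - 1][1]
        if dx == 0 and dy == 0:
            continue  # red is stationary; no trajectory to scan along
        heading = math.atan2(dy, dx)
        frames += 1
        for color, pos in tracks.items():
            if color == "red":
                continue
            # Angle from the red circle to this candidate.
            cx, cy = pos[t][0] - red[t][0], pos[t][1] - red[t][1]
            angle = math.atan2(cy, cx)
            # Smallest signed angular difference, wrapped to [-pi, pi].
            diff = abs((angle - heading + math.pi) % (2 * math.pi) - math.pi)
            if diff < scan_angle:  # candidate lies along red's trajectory
                votes[color] += 1
    if frames == 0:
        return False, None
    best = max(votes, key=votes.get)
    if votes[best] / frames >= threshold:  # integrate evidence over time
        return True, best
    return False, None
```

The vote ratio also suggests how the error patterns in the abstract could arise: a non-pursued circle that briefly moves parallel to the red circle accumulates spurious votes (a false alarm), and truncating the loop after only a few frames mimics an observer who judges too early (a miss).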
