Brett Fajen, Oliver Layton, Robert Wild; Knowing when to give up: Control strategies for choosing whether to pursue or abandon the chase of a moving target. Journal of Vision 2016;16(12):1359. doi: 10.1167/16.12.1359.
An important but neglected aspect of tasks that involve interception of moving targets on foot is knowing when to stop pursuing a target that is moving too fast to catch. Whether the target is a prey animal in the wild or an opponent on the playing field, chasing an uncatchable target is not only futile but also a waste of energy. The aim of this study was to test predictions of various models of how humans choose whether to pursue or abandon the chase of a moving target. Subjects used a steering wheel and foot pedals to pursue a target moving through an open field in a virtual environment, and attempted to intercept the target before it escaped into a thicket of trees (the "safe zone" for the target). They were also instructed to quickly press a button if they perceived that they could not reach the target before it escaped. Target speed, trajectory angle, and initial position were varied such that targets were catchable on some trials and uncatchable on others. After each trial, subjects received a reward (points) if they intercepted the target and a distance penalty based on how far they traveled. The net reward was positive when subjects intercepted the target and negative when they pursued uncatchable targets, but losses were minimized when subjects quickly gave up on uncatchable targets. Subjects' decisions to pursue or give up were consistent with a model that relies on the optically specified minimum speed required to intercept the target. Models that rely on simpler optical variables that correlate with, but do not specify, minimum required speed (e.g., change in target bearing angle) were not successful at capturing subjects' decisions. We also compared human behavior against that of an agent that learned a reward-maximizing policy through reinforcement learning.
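To make the minimum-required-speed idea concrete, the sketch below computes, in world coordinates, the slowest constant speed at which a pursuer could still reach a constantly moving target at some point before it escapes at a known time. This is an illustrative geometric computation only: the abstract's model operates on an optically specified variable available to the observer, not on world-frame positions, and the function name and parameters here are hypothetical.

```python
import numpy as np

def min_required_speed(pursuer_pos, target_pos, target_vel, t_escape, n=1000):
    """Minimum constant speed needed to intercept a target moving at
    constant velocity before it escapes at time t_escape.

    For each candidate interception time t in (0, t_escape], the required
    speed is the distance to the target's future position divided by t;
    the minimum over t is the slowest speed that still allows a catch.
    A target is 'catchable' if the pursuer's top speed exceeds this value.
    """
    ts = np.linspace(t_escape / n, t_escape, n)          # candidate times
    future = target_pos[None, :] + ts[:, None] * target_vel[None, :]
    dists = np.linalg.norm(future - pursuer_pos[None, :], axis=1)
    return float(np.min(dists / ts))

# Example: target 10 m to the right, moving crosswise at 2 m/s,
# reaching the safe zone after 5 s; pursuer starts at the origin.
v_min = min_required_speed(np.array([0.0, 0.0]),
                           np.array([10.0, 0.0]),
                           np.array([0.0, 2.0]),
                           t_escape=5.0)
```

In this example the required speed decreases with interception time, so the cheapest interception is at the escape boundary itself, giving v_min = sqrt(10² + 10²)/5 ≈ 2.83 m/s; a pursuer slower than that should, on this account, give up immediately.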
Meeting abstract presented at VSS 2016