December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract | December 2022
Unintended Consequences of Trying to Help: Augmented Target Recognition Cues Bias Perception
Author Affiliations & Notes
  • Catherine Konold
    University of Utah Psychology Department
  • Michael Geuss
    Combat Capabilities Development Command, Army Research Laboratory, Human Research and Engineering Directorate, Aberdeen Proving Ground, USA
  • Joshua Butner
    University of Utah Psychology Department
  • Mirinda Whitaker
    University of Utah Psychology Department
  • Ryan Murdock
    University of Utah Psychology Department
  • Jeanine Stefanucci
    University of Utah Psychology Department
  • Sarah Creem-Regehr
    University of Utah Psychology Department
  • Trafton Drew
    University of Utah Psychology Department
  • Footnotes
    Acknowledgements  ARL W911NF2020093
Journal of Vision December 2022, Vol.22, 3362. doi:https://doi.org/10.1167/jov.22.14.3362
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Rapid advances in computer vision mean that artificial-intelligence-aided systems may soon be able to provide helpful suggestions for a variety of complex visual tasks. One example of this approach is Augmented Target Recognition (ATR), in which Soldiers in the field are aided in a threat-detection task by a system that marks potential threats. It is currently unclear how ATR systems bias performance when the system is incorrect, which has important implications for the eventual adoption of such systems. In this study, participants were tasked with rapidly identifying 2-5 armed individuals in each of 100 generated images. Participants completed the task with aid from a Liberal system (more false alarms, fewer misses), a Conservative system (fewer false alarms, more misses), or with no additional information. We compared target-detection performance, operationalized as d', in both ATR conditions relative to the no-ATR condition. Both ATR systems improved the speed of threat detection, but the improvement in d' was negligible. Each system also induced a sizable response bias that varied with the criterion of the ATR system: participants with the Liberal ATR were much more likely to miss targets that the ATR had missed, whereas participants with the Conservative ATR were much more likely to identify incorrectly marked, unarmed people (false alarms) as threats. These results suggest that ATR cues induce automation bias, possibly due to attentional capture upon first viewing the scene with ATR markings. In a second experiment, we created an 'interactive' ATR (iATR) in which a target's classification was provided only after the user queried that target. This approach greatly reduced the bias observed in Experiment 1, but did not yield a net benefit in target detection.
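The abstract operationalizes detection performance as d' and describes criterion shifts (Liberal vs. Conservative bias). As a minimal sketch of the standard signal-detection computation behind those measures — the counts below are hypothetical illustrations, not data from the study — sensitivity d' and criterion c can be derived from hit and false-alarm rates:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute d' (sensitivity) and c (criterion) from raw counts.

    Adding 0.5 to each cell (a log-linear correction) avoids infinite
    z-scores when an observed rate is exactly 0 or 1.
    """
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    # c > 0 indicates a conservative criterion; c < 0 a liberal one.
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical counts chosen so the two observers match in d'
# but differ in bias direction, as in the Liberal vs. Conservative ATR pattern.
d_lib, c_lib = sdt_measures(hits=90, misses=10, false_alarms=30, correct_rejections=70)
d_con, c_con = sdt_measures(hits=70, misses=30, false_alarms=10, correct_rejections=90)
```

Under these illustrative counts the two observers have equal sensitivity, with a negative (liberal) criterion in the first case and a positive (conservative) criterion in the second — the signature the study reports for the two ATR conditions.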
