September 2018, Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Hybrid foraging meets navigation: Can augmented reality improve performance in real world search tasks?
Author Affiliations
  • Hayden Schill
    Brigham & Women's Hospital
  • Farahnaz Wick
    Brigham & Women's Hospital; Harvard Medical School
  • Matthew Cain
    U.S. Army, Natick Soldier Research & Development Center
  • Jeremy Wolfe
    Harvard Medical School
Journal of Vision September 2018, Vol. 18(10), 6. doi: https://doi.org/10.1167/18.10.6
Abstract

Classic visual search typically involves a single target in an artificial, static display. Real-world search tasks, however, can involve looking for multiple instances of multiple types of targets (hybrid foraging) among distractors. In this study, we investigated how the human "search engine" performs hybrid foraging while actively navigating through a 3D city terrain. We also investigated whether augmented reality, in the form of navigational cues, provides a benefit in this kind of complex, real-world search. In this videogame-style task, observers memorized 4, 8, or 16 target objects and were given two competing tasks: navigate to the endpoint before a time deadline and collect as many targets as possible. Navigational cues were either an 'arrow', presented at street corners and pointing toward the endpoint, or a 'waypoint', a number indicating the distance to the endpoint that decreased as observers moved in the correct direction. Our analysis focused on the cost of memory load, the type of navigational cue, and the pattern of target selection (the rate at which targets were picked and 'runs' of consecutive collections of the same target type). We found that navigational cues hindered search performance: observers given no navigational cues picked up more targets than those given cues (None: 0.492 targets/second, Arrow: 0.467, Waypoint: 0.393, p < .02). The rate at which targets were picked decreased as memory load increased (p < .01) and when navigational cues were provided (p < .01). The number of runs decreased significantly as memory load increased but did not differ significantly between navigation conditions. These results provide a first look at a complex search task in a dynamic display and at how the human search engine copes with navigational cues while performing visual search.

Meeting abstract presented at VSS 2018
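
To make the two foraging measures above concrete, here is a minimal sketch (not the authors' analysis code) of how a pickup rate in targets/second and a count of 'runs' could be computed from a session log; the log format and target-type labels are assumptions for illustration.

    from itertools import groupby

    def foraging_stats(pickups, session_duration_s):
        """Summarize one hypothetical foraging session.

        pickups: list of (timestamp_s, target_type) tuples in collection
                 order (assumed log format, for illustration only).
        session_duration_s: total search time in seconds.
        """
        # Pickup rate: targets collected per second of search.
        rate = len(pickups) / session_duration_s
        # A "run" is a maximal streak of consecutive pickups of the same
        # type; groupby collapses each streak into a single group.
        n_runs = sum(1 for _key, _grp in groupby(t for _, t in pickups))
        return rate, n_runs

    # Hypothetical 10-second session: 5 pickups in 3 runs -> (0.5, 3)
    log = [(1.2, "mailbox"), (2.8, "mailbox"), (4.1, "hydrant"),
           (6.5, "hydrant"), (9.0, "bench")]
    print(foraging_stats(log, session_duration_s=10.0))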
