Vision Sciences Society Annual Meeting Abstract
September 2024, Volume 24, Issue 10
Open Access
Comparing apples to oranges to bananas: A big data approach to understanding the joint influences of stimulus properties, trial history, and individual differences
Author Affiliations & Notes
  • Audrey Siqi-Liu
    The George Washington University
  • Emma M. Siritzky
    The George Washington University
  • Chloe Callahan-Flintoft
    DEVCOM Army Research Laboratory
  • Justin N. Grady
    The George Washington University
  • Kelvin S. Oie
    DEVCOM Army Research Laboratory
  • Stephen R. Mitroff
    The George Washington University
  • Dwight J. Kravitz
    The George Washington University
  • Footnotes
    Acknowledgements: US Army Research Laboratory Cooperative Agreements #W911NF-21-2-0179, #W911NF-23-2-0210, & #W911NF-23-2-0097
Journal of Vision September 2024, Vol. 24, 577. https://doi.org/10.1167/jov.24.10.577
Abstract

Research into the mechanisms of visual search has often explored how trial-level differences in the stimuli (e.g., set size, target salience) affect performance. However, performance is also influenced by trial history effects (e.g., priming, hysteresis) and individual differences (e.g., variation in task capacity). Each of these three factors (stimuli, history, individual differences) has been examined independently, but understanding their comparative importance, and how they may interact, is highly informative in a wide range of applied scenarios, for example when deciding whether to allocate resources to user-interface design, task training, or personnel selection, respectively. Unfortunately, it has traditionally been extremely difficult to evaluate each factor's contribution to task performance simultaneously, as doing so requires a large dataset with sufficient variation in trial-by-trial stimulus attributes, exposure to task conditions, and participant characteristics. The current study leveraged a massive dataset of visual search performance (~3.8 billion trials, ~15.5 million individuals) from a mobile game version of an airport security visual search task (Airport Scanner, Kedlin Co.) to quantify and compare the variance in performance accounted for by three factors: 1) trial-by-trial features of the search array (e.g., current-trial target identity and array set size), 2) trial history specific to each individual's experience of the task (e.g., cumulative exposure to a given target), and 3) individual differences in task aptitude (e.g., participant-specific target hit rate). Each of the three factors strongly contributed to performance, but, importantly, the nature and magnitude of their influences varied. In particular, individual differences were an extremely large, and comparatively the strongest, predictor of performance variance. Precise quantifications of each factor's comparative contribution across several task contexts are provided.
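
The abstract does not specify the statistical model used to partition variance, so the following is only an illustrative sketch of the general logic it describes: fitting linear models on trial-level data and comparing each factor group's standalone R² with its unique ΔR² (the drop in the full model's R² when that group is removed). All column names (set_size, cum_target_exposure, participant_hit_rate, etc.) and the simulated data are hypothetical stand-ins, not the Airport Scanner dataset or the authors' actual analysis.

```python
# Illustrative variance-partitioning sketch on simulated data.
# Columns and effect sizes are invented for demonstration only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000

# Simulated trial-level data with three factor groups:
# stimulus properties, trial history, and individual differences.
df = pd.DataFrame({
    "set_size": rng.integers(1, 30, n),              # stimulus: array set size
    "target_salience": rng.random(n),                # stimulus: target visibility
    "cum_target_exposure": rng.integers(0, 500, n),  # history: prior exposure to target
    "trial_number": rng.integers(1, 1000, n),        # history: overall practice
    "participant_hit_rate": rng.random(n),           # individual: baseline aptitude
})
# Toy outcome: accuracy driven by all three groups plus noise.
df["accuracy"] = (
    -0.01 * df["set_size"] + 0.3 * df["target_salience"]
    + 0.0005 * df["cum_target_exposure"]
    + 0.5 * df["participant_hit_rate"]
    + rng.normal(0, 0.2, n)
)

groups = {
    "stimulus": ["set_size", "target_salience"],
    "history": ["cum_target_exposure", "trial_number"],
    "individual": ["participant_hit_rate"],
}

def r2(cols):
    """R^2 of a linear model predicting accuracy from the given columns."""
    X, y = df[cols].to_numpy(), df["accuracy"].to_numpy()
    return LinearRegression().fit(X, y).score(X, y)

all_cols = [c for cols in groups.values() for c in cols]
full = r2(all_cols)
for name, cols in groups.items():
    # Unique contribution: R^2 lost when this group is dropped from the full model.
    reduced = r2([c for c in all_cols if c not in cols])
    print(f"{name:>10}: alone R^2 = {r2(cols):.3f}, unique ΔR^2 = {full - reduced:.3f}")
print(f"{'full model':>10}: R^2 = {full:.3f}")
```

With billions of real trials rather than this toy sample, even very small ΔR² differences between factor groups can be estimated precisely, which is what makes the comparative quantification described above feasible at scale.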
