Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2023
Measuring Object Recognition Ability: Reliability, Validity, and the Aggregate z-score Approach.
Author Affiliations & Notes
  • Conor J. R. Smithson
    Vanderbilt University
  • Jason K. Chow
    Vanderbilt University
  • Ting-Yun Chang
    Vanderbilt University
  • Isabel Gauthier
    Vanderbilt University
  • Footnotes
    Acknowledgements: This work was supported by the David K. Wilson Chair Research Fund (Vanderbilt University).
Journal of Vision August 2023, Vol.23, 5110. doi:https://doi.org/10.1167/jov.23.9.5110
Conor J. R. Smithson, Jason K. Chow, Ting-Yun Chang, Isabel Gauthier; Measuring Object Recognition Ability: Reliability, Validity, and the Aggregate z-score Approach. Journal of Vision 2023;23(9):5110. https://doi.org/10.1167/jov.23.9.5110.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Measurement of domain-general object recognition ability (o) requires the minimisation of domain-specific influences on scores. For this purpose, it is useful to combine multiple tasks that differ in task demands and stimuli. One approach is to model o as a latent variable explaining performance on such a battery of tasks; however, the time and sample size requirements limit the use of this approach. Alternatively, an aggregate measure of o can be obtained by averaging z-scores from each task. Using data from Sunday et al. (2022), we demonstrate that aggregate scores from just two object recognition tasks with differing stimuli and task demands provide a good approximation (r = .79) of factor scores calculated from a larger confirmatory factor model in which six tasks and three object categories were used. Indeed, some task combinations produced correlations of up to r = .87 with factor scores. We then revise these measures to reduce testing time, and additionally develop an odd-one-out task that uses a unique object category on each trial. Greater diversity of task demands and objects should provide more accurate measurement of domain-general ability. To test the reliability and validity of our measures, 163 participants completed our three object recognition tasks on two occasions, spaced one month apart. Providing the first evidence that o is stable over time, our 15-minute aggregate o measure demonstrated good test-retest reliability (r = .77) at this interval, and hierarchical regression showed that the stability of o could not be completely accounted for by intelligence, perceptual speed, and early visual processing. Using structural equation modelling, we show that our measures all load significantly onto the same latent variable, and demonstrate that, as a latent variable, o is highly stable (r = .94) over a month. Our measures are freely available and can be downloaded at https://github.com/OPLabVanderbilt/Ojs/tree/main/standalone
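
To make the aggregate z-score approach concrete, the sketch below standardises hypothetical per-task accuracies across participants and averages the resulting z-scores into a single composite estimate of o. This is a minimal illustration under stated assumptions: the task names, data values, and use of Python/pandas are hypothetical and are not the authors' analysis code; the actual measures are distributed at the GitHub link above.

    # Illustrative sketch of the aggregate z-score approach (hypothetical data).
    import pandas as pd

    # Hypothetical proportion-correct scores on three object recognition tasks
    scores = pd.DataFrame({
        "matching_task":    [0.71, 0.80, 0.64, 0.90],
        "memory_task":      [0.62, 0.75, 0.58, 0.83],
        "odd_one_out_task": [0.55, 0.69, 0.51, 0.77],
    })

    # Standardise each task separately so tasks on different scales contribute equally
    z = (scores - scores.mean()) / scores.std(ddof=1)

    # Aggregate o estimate: the mean z-score across tasks for each participant
    scores["o_aggregate"] = z.mean(axis=1)
    print(scores)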
