Abstract
Measurement of domain-general object recognition ability (o) requires the minimisation of domain-specific influences on scores. For this purpose, it is useful to combine multiple tasks that differ in task demands and stimuli. One approach is to model o as a latent variable explaining performance on such a battery of tasks; however, time and sample requirements limit the use of this approach. Alternatively, an aggregate measure of o can be obtained by averaging z-scores from each task. Using data from Sunday et al. (2022), we demonstrate that aggregate scores from just two object recognition tasks with differing stimuli and task demands provide a good approximation (r = .79) of factor scores calculated from a larger confirmatory factor model in which six tasks and three object categories were used. Indeed, some task combinations produced correlations of up to r = .87 with factor scores. We then revise these measures to reduce testing time, and additionally develop an odd-one-out task that uses a unique object category on each trial. Greater diversity of task demands and objects should provide more accurate measurement of domain-general ability. To test the reliability and validity of our measures, 163 participants completed our three object recognition tasks on two occasions, spaced one month apart. Providing the first evidence that o is stable over time, our 15-minute aggregate o measure demonstrated good test-retest reliability (r = .77) at this interval, and hierarchical regression showed that the stability of o could not be completely accounted for by intelligence, perceptual speed, and early visual processing. Using structural equation modelling, we show that our measures all load significantly onto the same latent variable, and we also demonstrate that, as a latent variable, o is highly stable (r = .94) over a month. Our measures are freely available to use and can be downloaded from https://github.com/OPLabVanderbilt/Ojs/tree/main/standalone
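As a minimal sketch of the aggregation described above (the notation is ours, not drawn from the paper): for participant $i$ completing $k$ tasks, the aggregate score is simply the mean of the within-sample z-scores,

\[
\hat{o}_i = \frac{1}{k}\sum_{j=1}^{k} z_{ij}, \qquad z_{ij} = \frac{x_{ij} - \bar{x}_j}{s_j},
\]

where $x_{ij}$ is the raw score of participant $i$ on task $j$, and $\bar{x}_j$ and $s_j$ are the sample mean and standard deviation of scores on that task.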