Open Access
Article | August 2023
Test–retest reliability of eye tracking measures in a computerized Trail Making Test
Lukas Recker, Christian H. Poth
Journal of Vision, August 2023, Vol. 23, 15. https://doi.org/10.1167/jov.23.8.15
Abstract

The Trail Making Test (TMT) is a frequently applied neuropsychological test that evaluates participants’ executive functions based on their time to connect a sequence of numbers (TMT-A) or alternating numbers and letters (TMT-B). Test performance is associated with various cognitive functions ranging from visuomotor speed to working memory capabilities. However, although the test can screen for impaired executive functioning in a variety of neuropsychiatric disorders, it provides little information about which specific cognitive impairments underlie performance detriments. To resolve this lack of specificity, recent cognitive research combined the TMT with eye tracking so that eye movements could help uncover the reasons for performance impairments. However, using eye-tracking-based test scores to examine differences between persons, and ultimately applying the scores for diagnostics, presupposes that the reliability of the scores is established. Therefore, we investigated the test–retest reliabilities of scores in an eye-tracking version of the TMT recently introduced by Recker et al. (2022). We examined two healthy samples performing an initial test and then a retest 3 days (n = 31) or 10 to 30 days (n = 34) later. Results reveal that, although the reliabilities of classic completion times were overall good and comparable with earlier versions of the test, the reliabilities of eye-tracking-based scores ranged from excellent (e.g., durations of fixations) to poor (e.g., number of fixations guiding manual responses). These findings indicate that some eye-tracking measures offer a strong basis for assessing interindividual differences beyond classic behavioral measures when examining processes related to information accumulation but are less suitable for diagnosing differences in eye–hand coordination.

Introduction
To assess neurocognitive functionality, neuropsychologists challenge individuals in certain aspects of their performance using neuropsychological tests. Using these tests for neuropsychological diagnostics in applied research and clinical settings requires that measures of test performance can be obtained and interpreted efficiently and easily. Therefore, readily available measures such as reaction times and errors are the variables most frequently used to derive test scores quantifying performance. One test widely used as an easily applicable screening of cognitive performance is the Trail Making Test (TMT) (Reitan, 1958). In this test, participants connect a sequence of numbers (TMT-A) or alternating numbers and letters (TMT-B) via pencil on paper. Based on the time needed to complete the sequence, the participant’s level of executive functioning is evaluated (Bowie & Harvey, 2006; Salthouse, 2011; Sánchez-Cubillo et al., 2009). This powerful but broad measure is frequently applied to examine and diagnose multiple neuropsychiatric disorders (e.g., Ashendorf, Jefferson, O'Connor, Chaisson, Green, & Stern, 2008; Muir et al., 2015; O'Rourke et al., 2011; Wölwer & Gaebel, 2002) or to investigate executive functions of healthy individuals (e.g., Hwang et al., 2016). However, a number of different cognitive functions are implicated in completing the sequence, so that completion time cannot differentiate between the specific neurocognitive processes that underlie performance. To address this problem, recent studies have introduced measures beyond completion times that more specifically reflect cognitive processes and mechanisms and therefore offer a more detailed understanding of impaired and intact cognitive functionality. That is, they introduced a computerized version of the test that included eye tracking (Linari, Juantorena, Ibáñez, Petroni, & Kamienkowski, 2022; Recker, Foerster, Schneider, & Poth, 2022; Wölwer & Gaebel, 2002). This addition allows, for example, assessing the number and duration of participants’ eye fixations or the amplitude of their saccadic eye movements, both of which offer new information about cognitive processes, such as the attentional selection of visual information and the allocation of cognitive processing resources (e.g., Hutton, 2008; Liversedge & Findlay, 2000; Salthouse & Ellis, 1980). Ongoing technological advancements such as portable eye trackers or virtual reality goggles (Foerster, Poth, Behler, Botsch, & Schneider, 2016; Foerster, Poth, Behler, Botsch, & Schneider, 2019) and their combination with eye tracking make it increasingly feasible to apply experimental tests with eye tracking in neuropsychological testing scenarios. In the context of the TMT, several recent studies have already demonstrated the utility of eye tracking for understanding task performance in terms of specific cognitive functions (Linari et al., 2022; Recker et al., 2022; Wölwer & Gaebel, 2002). However, to use such new measures for individual diagnostics in research and applied settings, it is important to first establish that the measures are reliable (Mollon, Bosten, Peterzell, & Webster, 2017; Wilmer, 2017). This seems particularly important because the reliability determines how strongly the measures can maximally correlate with measures from other tests or diagnostic assessments (e.g., neurological assessments).
As such, the reliability of the measures lays a necessary foundation for later validations based on correlations with other tests and clinical diagnostics (Cronbach & Meehl, 1955). Therefore, here we aim to assess the test–retest reliability of a computerized TMT, including a detailed neurocognitive profile of different eye-tracking-based measures, first introduced by Recker et al. (2022).
The traditionally conducted TMT is a paper-and-pencil test that assesses the level of executive functioning of participants. It is made up of two parts, each thought to reflect a set of different cognitive functions. The TMT-A consists of a sequence of numbers from 1 to 25, which participants are asked to connect in ascending order. Part A of the test is associated with measures of processing and motor speed and visual search (Bowie & Harvey, 2006; Crowe, 1998; Salthouse, 2011; Sánchez-Cubillo et al., 2009). The TMT-B consists of a sequence of numbers from 1 to 13 and letters from A to L, which participants are asked to connect in ascending order while alternating between the two sets. Part B is associated with measures of task switching and cognitive flexibility (Arbuthnott & Frank, 2000; Bowie & Harvey, 2006; Kortte, Horner, & Windham, 2002; Sánchez-Cubillo et al., 2009). Deviations in completion times and errors in both test halves can therefore be a useful indicator of impairments in the executive functions of patients, which is also supported by matching neural correlates revealed by a number of lesion studies (Kopp et al., 2015; Muir et al., 2015; Reitan, 1958; Varjacic, Mantini, Demeyere, & Gillebert, 2018). Executive functions cover a range of cognitive domains influencing our everyday behavior (Cohen, 2017; Diamond, 2013). However, in contrast to the range of affected functions, the TMT is limited by its restricted set of outcome measures; therefore, conclusions about impairments of cognitive functions are rather unspecific. For example, slowed performance in a sequence can stem from multiple subprocesses, such as difficulties with locating targets, keeping track of the current position, or performing the actual movement. Amending the test with the capabilities provided by eye tracking can shed light on the subprocesses influencing overall performance. Eye movements are closely linked to perceptual and attentional processes (Findlay & Gilchrist, 2008; Land & Tatler, 2009; Schütz, Braun, & Gegenfurtner, 2011) and are grounded in a variety of cognitive (Bundesen, Habekost, & Kyllingsbæk, 2011; Hutton, 2008; Wolfe, 2021) and neurophysiological (Henderson & Choi, 2015; Krauzlis, Goffart, & Hafed, 2017; Schall & Thompson, 1999) theories. Therefore, investigating eye movements can help discriminate the reasons for emerging patterns in performance and thus increase the conceptual specificity of a test.
Up until now, only a few studies have applied eye tracking in the context of the TMT. Aside from a study using eye movements as a substitute for hand movements in patient groups with motor impairments (Hicks et al., 2013), to the best of our knowledge only three studies have analyzed eye movements in computerized versions of the TMT while still using manual actions to perform the task. To disentangle the impaired performance of schizophrenic patients in the TMT (e.g., Laere, Tee, & Tang, 2018; Periáñez et al., 2007), Wölwer and Gaebel (2002) conducted a study using a computerized TMT while also recording participants’ gaze. Analyzing the spatial alignment of cursor and eye positions, they found that differences in performance (i.e., completion times) were the result of differences in what they called “planning periods,” periods spent planning the next movement as opposed to periods monitoring the current mouse cursor movement. That is, schizophrenic patients differed in the time spent planning their movements to the next target and in the efficiency of this planning. Linari et al. (2022) further expanded the set of examined eye movement measures, including saccade durations and saccade amplitudes. However, they found that, of the examined eye movement measures, only the number of fixations differed between TMT-A and TMT-B. Segregating task performance into periods, similar to Wölwer and Gaebel (2002), they found that the difference in the number of fixations came from a prolonged period of exploration and planning in TMT-B. Finally, in a recent study, Recker et al. (2022) introduced a computerized TMT that adds eye-tracking measures more commonly applied in studies of natural tasks, such as different fixation types and the eye–hand span (e.g., Foerster, Carbone, Koesling, & Schneider, 2011; Land & Hayhoe, 2001; Land, Mennie, & Rusted, 1999). They also found differences in the number of fixations between test halves A and B. Analyses of the fixation types revealed that these differences came from a change in searching fixations (i.e., fixations on previous or future targets) as opposed to guiding fixations (i.e., fixations on the current target). Taken together, these studies highlight the additional insight into more specific cognitive processes provided by eye tracking in the TMT.
In their study, Recker et al. (2022) also included an additional manipulation of participants’ task set. They instructed participants to perform both test halves once emphasizing speed and once emphasizing accuracy. Using this manipulation, they investigated which of the included test scores were relatively (in-)dependent of this factor, granting additional specificity in terms of their relationship with cognitive control. In the original TMT, the difference in completion times between test halves A and B is frequently calculated to provide a score associated with the cognitive control abilities of participants (Sánchez-Cubillo et al., 2009). The newly included manipulation of emphasizing speed or accuracy introduces the possibility of providing an additional measure of cognitive control. In contrast to the difference score, which depends on a stimulus-driven contrast between conditions (i.e., a sequence of numbers vs. a sequence of alternating numbers and letters), this measure is based solely on an internal switch between task sets in TMT-A and TMT-B. The ability to shift between an emphasis on speed or accuracy seems to be one of the most fundamental internal priorities we can set, affecting almost any given task (Carrasco & McElree, 2001; Dietze & Poth, 2022; Heitz, 2014; Rae, Heathcote, Donkin, Averell, & Brown, 2014; Wickelgren, 1977). Calculating a score based on this internal criterion can therefore extend the capabilities of the TMT to measure abilities of cognitive control without the need for further stimulus-dependent alterations of the task.
To not only improve the general understanding of performance-determining mechanisms but also use newly introduced scores for the study of interindividual differences, it is necessary to assess such scores in terms of their test–retest reliability (Goodhew & Edwards, 2019; Mollon et al., 2017; Wilmer, 2008). In the past, the most common approaches in vision science and experimental psychology have put an emphasis on describing effects for the “standard observer.” That is, variance introduced by the individual would be discarded as noise and dismissed from further analyses to focus on the effects of experimental conditions and their related cognitive processes. Paradigms that have emerged from this research tradition are often designed to maximize experimental effects, and little is known about the reliability of their respective eye-tracking measures. Recent studies have shown that some paradigms produce stable, replicable results but are not suited for the examination of interindividual differences (Clark, Birch-Hurst, Pennington, Petrie, Lee, & Hedge, 2022; Hedge, Powell, & Sumner, 2018). On the other hand, research investigating eye-tracking measures for the study of interindividual differences has provided promising results. An early study by Andrews and Coppola (1999) found that fixation durations and saccadic amplitudes correlated within two clusters of tasks, which they framed as active and passive. This stability across tasks has been repeatedly demonstrated (e.g., Boot, Becic, & Kramer, 2009; Castelhano, Mack, & Henderson, 2009; Rayner, Li, Williams, Cave, & Well, 2007) and was later found to also persist over time (Henderson & Luke, 2014). In the reading literature, there has been an effort to demonstrate the reliability and stability of different eye movement measures over time (Carter & Luke, 2018; Henderson & Luke, 2014; Staub, 2021). An extensive study by Bargary, Bosten, Goodbourn, Lawrance-Owen, Hogg, and Mollon (2017) suggested that eye-tracking measures can provide an idiosyncratic signature by which individuals could be identified because of their high interindividual stability. These studies illustrate, on the one hand, the potential of eye tracking for the study of interindividual differences and, on the other hand, stress the importance of first establishing the measures’ reliabilities.
Because the reliabilities of eye-tracking scores and related additional scores in the TMT are largely unknown, we focus on this research question. In the present study, we therefore examine the test–retest reliability of the test scores included and introduced by Recker et al. (2022) in their computer-based version of the TMT. Furthermore, we include a new additional test score describing the speed–accuracy trade-off of participants as an internal measure of cognitive control in the examination of executive functions. We examine two samples, each collected across two sessions spanning multiple days. Sample A was retested 3 days after initial testing; sample B was retested 10 to 30 days after initial testing. The different lag times allow an inspection of the relatively short- and mid-term reliability and stability of our scores. For example, training effects could affect performance with repeated testing. Different lag times allow us to see whether and how long these effects might persist and, if they differ between test intervals, how this affects reliability. For the analysis, we calculated the intraclass correlation coefficient (ICC) as an indicator of test–retest reliability and utilized additional measures of agreement based on Bland–Altman calculations (Bland & Altman, 1999; Haghayegh, Kang, Khoshnevis, Smolensky, & Diller, 2020). The ICC indicates whether we can consider a measure to be relatively stable. A high ICC indicates that differences between persons are stable across time points; that is, each person's position (rank) within a reference group is stable. However, the ICC gives no information about whether absolute values changed between measurements. Therefore, we included metrics from Bland–Altman calculations that indicate whether and how much the absolute values changed. This becomes important when single individuals are tested repeatedly, for example, to investigate improvements from one time point to the next, which is often relevant in clinical settings. We expect results on completion times in the new version of the TMT to resemble the reliabilities of the original version of the test. Previous work on eye movements in the TMT (Hicks et al., 2013; Linari et al., 2022; Recker et al., 2022) and in a variety of different cognitive tasks (e.g., Andrews & Coppola, 1999; Henderson & Luke, 2014; Rayner et al., 2007) found that fixation durations and saccade amplitudes were relatively stable across the examined conditions and tasks. This might also be reflected in high stabilities across sessions. As for the remaining eye-tracking measures, it is unknown whether previously observed differences between conditions also produce stable individual differences between persons.
Methods
Participants
The dataset is divided into two samples gathered across two larger research projects. In both projects, the experiment was conducted as part of a selection of multiple experiments. Within the first project (sample A), 31 healthy participants (26 female, 5 male) were tested in two sessions scheduled 3 days apart. Participants in this group were between 19 and 38 years old (median = 23; interquartile range [IQR] = 4) and were tested for normal or corrected-to-normal vision. Within the second project (sample B), 34 healthy participants (24 female, 10 male) were tested. Due to the COVID-19 pandemic, participants completed their second session with variable intervals between sessions (i.e., 10–30 days).1 On average, the time between sessions was 16 days. Participants in this group were between 18 and 53 years old (median = 24; IQR = 4) and were also tested for normal or corrected-to-normal vision.
We report and calculated ICCs for sample A and sample B individually. That is, calculations are based on measurements of sessions one and two of the respective sample. 
The analysis of the dataset as part of this study was preregistered via the Open Science Framework (https://osf.io/fwhjb/). Participants gave written informed consent before their participation and received course credit or monetary compensation as a reward. The study was approved by the local ethics committee of Bielefeld University within the context of both larger research projects. 
Apparatus and stimuli
The apparatus, stimuli, and procedures were the same (same lab facilities, same experimental code) for the samples in both projects. Participants sat in a dimly lit room in front of a ViewSonic G90FB CRT monitor (ViewSonic, Brea, CA) with a resolution of 1024 × 768 pixels (corresponding to 36 × 27 cm physical dimensions) and a refresh rate of 100 Hz (preheated for at least 5 minutes; Poth & Horstmann, 2017). Each participant's head position was stabilized using a chin rest placed 71 cm away from the screen. Participants controlled the experiment with a Logitech RX250 computer mouse (Logitech, Lausanne, Switzerland) held in their right hand. The position of the mouse was sampled at 100 Hz. Movements of the right eye were recorded at a sampling rate of 1000 Hz via a tower-mounted EyeLink 1000 eye tracker (SR Research, Ottawa, ON, Canada). The experiment ran on a Windows 7 desktop PC (Microsoft, Redmond, WA) and was written in Python 3.6 using the packages psychopy (Peirce et al., 2019) and pylink (SR Research) for stimulus presentation and control of the eye tracker, respectively.
Stimuli appeared on a uniform, gray background (30 cd/m²). Targets were black, unfilled circles with a diameter of 1.35°. The numbers or letters making up the sequence were presented within the circles and written in black Arial with a number/letter height of 20 points. When a target was hit correctly (i.e., the participant clicked the current target within its circle), it changed its color from black (1.5 cd/m²) to white (93.2 cd/m²). In each trial, stimuli were randomly placed across the screen using a 5 × 5 grid with cells measuring 4.32° × 4.32°. Each grid cell contained one stimulus, and stimuli were required to be at least 1.35° apart.
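For illustration, the placement scheme just described can be sketched as follows (our reconstruction in R, not the authors' Python experiment code; function and parameter names are our own):

```r
# Sketch of the stimulus placement: one target per cell of a 5 x 5 grid of
# 4.32 deg cells, re-drawn until all pairwise distances are at least 1.35 deg.
place_targets <- function(n_rows = 5, n_cols = 5, cell_deg = 4.32,
                          min_dist = 1.35) {
  cells <- expand.grid(col = seq_len(n_cols) - 1, row = seq_len(n_rows) - 1)
  repeat {
    # draw a uniformly random position for each target within its grid cell
    x <- (cells$col + runif(nrow(cells))) * cell_deg
    y <- (cells$row + runif(nrow(cells))) * cell_deg
    # accept the layout only if every pair of targets is far enough apart
    if (min(dist(cbind(x, y))) >= min_dist) {
      return(data.frame(x = x, y = y))
    }
  }
}

set.seed(1)
positions <- place_targets()  # 25 positions in degrees of visual angle
```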
Procedure
Participants received written on-screen instructions on the overall task of clicking through a sequence of numbers (TMT-A) or numbers and letters (TMT-B). Per session, each participant completed 10 trials divided into two blocks. They first completed five trials of TMT-A and then five trials of TMT-B, including one shortened training trial (eight targets) each. Before each block, they completed a nine-point grid calibration for the eye tracker. Before each trial, they were instructed on-screen to complete the next sequence as quickly or as accurately as possible (i.e., to hit the targets as centrally as possible). The order of blocks was the same for all participants (i.e., first TMT-A, then TMT-B), and the order of instructions (i.e., speed or accuracy emphasis) was randomized across participants. Each instruction was given two times per block; that is, participants completed two trials emphasizing speed and two trials emphasizing accuracy in TMT-A and TMT-B, respectively. They then started the trial by simultaneously looking at and clicking on a centrally presented fixation cross. Subsequently, they clicked through the presented sequence of 25 numbers (1, 2, 3, …, 25) or 25 alternating numbers and letters (1, A, 2, B, …, 13) (see Figure 1 for examples of stimulus displays). If they hit a target, its color changed from black to white. If they missed a target, the circumcircle remained black, indicating that the number was not successfully checked. Clicks were counted as correct if they fell within the circumcircle of the number/letter. Note that, contrary to original versions of the TMT, no paths were drawn on the screen; participants only clicked through the sequence. This also allowed paths between targets to cross. Overall, the experiment took approximately 10 minutes. During their second session, participants performed the exact same experiment again; that is, the order of trials (e.g., instructions) and the spatial distributions of the targets were the same as during session one.
Figure 1.
Example stimulus displays for TMT-A (A) and TMT-B (B).
Dependent variables
We included the following dependent variables in our analyses of reliabilities: The completion time of participants was defined as the time between the initiation of a trial (i.e., clicking the fixation stimulus) and the first click on the last target of a sequence. The speed–accuracy measure represents the slope of a regression predicting the accuracy of each click, relative to the center of its target, from the reaction time for that target. For this purpose, we determined the relative accuracy of a click between 1 (i.e., target hit exactly in the center of its circumcircle) and 0 (i.e., target missed). Predicting this accuracy from the reaction times of the respective targets allowed us to compare speed–accuracy trade-offs between experimental conditions via their slopes (Heitz, 2014). Based on the EyeLink algorithm, fixations and saccades were detected using a velocity threshold of 30°/s and an acceleration threshold of 8000°/s². Blinks and events preceding or following a blink by 50 ms were excluded. We used median fixation durations and the overall number of fixations for our analysis, excluding fixations shorter than 50 ms. The number of fixations was further broken down into the number of searching and the number of guiding fixations. The number of searching fixations is given by all fixations that fall on an object that is not the current target of the sequence. In contrast, the number of guiding fixations describes the number of fixations on current targets of the sequence (Foerster & Schneider, 2015; Land & Tatler, 2009). In both cases, fixations were counted as falling on an object if they were within a 3.25° radius of it. In terms of saccades, we used the median saccade amplitude, excluding saccades smaller than 0.1° to exclude microsaccades (Martinez-Conde, Macknik, Troncoso, & Hubel, 2009). Finally, we looked at the scanpath length as the overall path the eyes covered during a trial and calculated the eye–hand span as the time between a fixation on a target and the subsequent click on that target (Land & Tatler, 2009). Positive values in this measure indicate that the eyes led the hand.
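For illustration, the speed–accuracy slope for a single trial could be computed as follows (a minimal sketch in R; the column names reaction_time and accuracy and the example values are hypothetical):

```r
# Speed-accuracy score for one trial: slope of a regression predicting click
# accuracy (1 = exact center hit, 0 = miss) from per-target reaction times.
speed_accuracy_slope <- function(trial) {
  fit <- lm(accuracy ~ reaction_time, data = trial)
  unname(coef(fit)["reaction_time"])
}

# Hypothetical per-target data for a five-target trial:
trial <- data.frame(
  reaction_time = c(0.61, 0.84, 0.72, 1.10, 0.95),  # seconds
  accuracy      = c(0.40, 0.75, 0.55, 0.90, 0.80)   # relative to target center
)
speed_accuracy_slope(trial)  # positive slope: slower clicks were more accurate
```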
Statistical analysis
All trials were included in the analysis. We analyzed reliabilities across samples in two ways (see also Figure 2 for a visualization of the different metrics). First, we calculated the ICCs and determined 95% confidence intervals (CIs) based on a two-way, mixed-effects model for average agreement (ICC[A,2]) (cf. Koo & Li, 2016). We used this model because we were interested in the test–retest reliability of a task in which participants performed more than one trial in each investigated condition. ICCs were calculated for each experimental condition (TMT-A and TMT-B; speed and accuracy instructions) in sample A and sample B. Classifications of ICCs are as follows: excellent (>0.80), good (0.60–0.80), moderate (0.40–0.60), or poor (<0.40) (Cicchetti & Sparrow, 1981; Clark et al., 2022; Landis & Koch, 1977). The results provide an estimate of the relative stability of a score in terms of its interindividual stability. Second, we provide measures based on Bland–Altman analyses of the absolute agreement of two measures (Bland & Altman, 1999). The bias describes the mean difference between the two measures. Because the value is computed by subtracting measure two from measure one, in our case positive values indicate larger measures in session one of a sample. Thus, the bias describes the absolute stability of a test score. A bias of zero would indicate that, on average, a test score did not change across sessions. The limits of agreement (LOAs) specify the range covering 95% of the measured differences between sessions one and two. That is, although the absolute stability of a score could be good (e.g., its bias is zero), individual measures could still disperse widely around this bias. The LOAs therefore indicate how stable the bias is across the sample. Finally, we provide the slope (b1) of the Bland–Altman analysis, that is, the slope of the linear regression predicting the difference between measures from the mean of the measures. This gives an estimate of possible systematic differences between time points that might be characteristic of specific ranges of a variable (e.g., participants who are slow on average might differ less between time points). To facilitate qualitative conclusions about these systematic differences, we report the significance of the slope. That is, if a slope is significant, there is a systematic relationship between individuals’ performances and across-session effects. Analyses were conducted in R 4.1.3 (R Core Team, 2019). For the computation of the ICCs, we used the package irr (Gamer, Lemon, & Singh, 2019). For the computation of the Bland–Altman statistics, we used the package blandr (Datta, 2017). For the slopes of the Bland–Altman regression, we report results as significant if p < 0.05; there were no adjustments for multiple comparisons.
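As a rough illustration, both sets of metrics can be reproduced along the following lines (a minimal sketch with hypothetical session scores; the study used irr for the ICCs and blandr for the Bland–Altman statistics, whereas the Bland–Altman metrics are spelled out in base R here):

```r
library(irr)

# Hypothetical data: one row per participant, one column per session
# (scores averaged across the trials of one condition).
scores <- data.frame(session1 = c(31.2, 28.4, 40.1, 35.8, 25.3),
                     session2 = c(29.8, 27.9, 36.5, 34.0, 26.1))

# ICC(A,2): two-way model, absolute agreement, average of the two sessions,
# reported with its 95% CI (cf. Koo & Li, 2016).
icc(scores, model = "twoway", type = "agreement", unit = "average")

# Bland-Altman metrics, written out in base R:
d    <- scores$session1 - scores$session2  # positive = larger in session one
m    <- rowMeans(scores)
bias <- mean(d)                            # absolute stability of the score
loa  <- bias + c(-1.96, 1.96) * sd(d)      # limits of agreement
b1   <- coef(lm(d ~ m))["m"]               # slope: level-dependent session effects
```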
Figure 2.
Illustrative example of chosen reliability metrics. The left panel shows an approximate depiction (i.e., dashed line) of an intraclass correlation representing the relative agreement between two sessions. The right panel shows a Bland–Altman plot including its descriptive metrics representing the absolute agreement between sessions. The bias is the mean difference between sessions across all participants. The limits of agreement (LOAs) mark the values that contain 95% of the individual differences between sessions. The weight (b1, highlighted in red) of the regression difference between sessions ∼ mean completion time can uncover systematic differences arising between sessions. Gray areas represent 95% CIs. The illustrated data represent the accuracy condition in TMT-A of sample A.
Results and discussion
The results for sample A, including the respective measures of reliability and an overview of the descriptive statistics for each test score for the initial test, as well as the retest, are given in Table 1. Table 2 contains the same results for sample B. 
Table 1.
Sample A results on test–retest reliabilities of scores in the computerized TMT. Sessions were 3 days apart. Results for the Bland–Altman analyses, including the bias, the limits of agreement (LOAs; i.e., bias ± 1.96 × standard deviation), and the slope (b1), and the ICCs based on a two-way, mixed-effects model for average agreement (ICC[A,2]) are given for each investigated test score. For biases and ICCs, 95% CIs are given in square brackets. For LOAs, the standard errors of the LOAs are given in parentheses. Slopes printed in bold indicate significant weights (*p < 0.05, **p < 0.01, ***p < 0.001).
Table 2.
Sample B results for test–retest reliabilities of scores in the computerized TMT. Sessions were 10 to 30 days apart. Results for the Bland–Altman analyses, including the bias, the limits of agreement (LOAs; i.e., bias ± 1.96 × standard deviation), and the slope (b1), and the ICCs based on a two-way, mixed-effects model for average agreement (ICC[A,2]) are given for each investigated test score. For biases and ICCs, 95% CIs are given in square brackets. For LOAs, the standard errors of the LOAs are given in parentheses. Slopes printed in bold indicate significant weights (*p < 0.05, **p < 0.01, ***p < 0.001).
Precision estimation
Given that we aimed to describe reliabilities through estimates of a coefficient (i.e., ICCs) rather than to test coefficients against some value, we examined the precision of our estimation. Therefore, we calculated the widths of the 95% CIs that we could achieve given our sample sizes for a selection of ICCs (Clark et al., 2022; Doros & Lew, 2010). Adapting the procedure of Clark et al. (2022), we simulated the data of 100,000 individuals with a predefined ICC. We then estimated the resulting CIs 10,000 times for our sample sizes (i.e., 30 in sample A and 34 in sample B). The results depicted in Table 3 demonstrate that the CIs were wide for low ICCs but narrower for higher ICCs. Because we predominantly wanted to identify test scores with high test–retest reliabilities, our samples should provide a valid assessment for this aim. Furthermore, the examination of two independent samples with differing test–retest intervals is beneficial because the time between testing occasions can be a critical factor (Mollon et al., 2017). Similar results in both samples therefore improve the estimation of test–retest reliability.
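A reduced version of this simulation could look as follows (a sketch in R with smaller iteration counts than the study's 100,000 simulated individuals and 10,000 resamples; it assumes the predefined ICC refers to the single-measure population reliability):

```r
library(irr)

# Average 95% CI width of ICC(A,2) estimates for samples of size n drawn from
# a population whose two sessions share a true score with variance rho.
simulate_ci_width <- function(rho, n, n_sims = 1000, pop_size = 10000) {
  true_score <- rnorm(pop_size, sd = sqrt(rho))
  pop <- cbind(s1 = true_score + rnorm(pop_size, sd = sqrt(1 - rho)),
               s2 = true_score + rnorm(pop_size, sd = sqrt(1 - rho)))
  widths <- replicate(n_sims, {
    res <- icc(pop[sample(pop_size, n), ], model = "twoway",
               type = "agreement", unit = "average")
    res$ubound - res$lbound  # width of the estimated 95% CI
  })
  mean(widths)
}

set.seed(1)
simulate_ci_width(rho = 0.8, n = 31)  # e.g., sample A
```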
Table 3.
Precision estimates for the given sample sizes: the ranges of 95% CIs to be expected for three assumed ICC values, given the sample sizes of sample A (n = 30) and sample B (n = 33).
Completion time
As expected, completion times were generally faster in TMT-A than in TMT-B and slower in conditions emphasizing accuracy compared with speed. As evident from the biases, performance in TMT-A was relatively stable across sessions, whereas completion times in TMT-B tended to be faster in later sessions, irrespective of the time between sessions, and were spread across a wider range (cf. Tables 1 and 2). This parallels results on training effects in the TMT indicating that performance improves with repeated testing, especially in TMT-B (Buck, Atkinson, & Ryan, 2008; McCaffrey, Ortega, & Haase, 1993). Also, we found significantly positive slopes in TMT-B for speed-instruction completion times in sample B and for accuracy-instruction completion times in sample A, suggesting that slower participants showed larger training effects. However, each of these results was present in only one sample and was not consistent across test intervals. Results for the ICCs indicated good to excellent test–retest reliabilities, with TMT-A (0.71 ≤ ICC ≤ 0.91) providing more stable results than TMT-B (0.64 ≤ ICC ≤ 0.82). This result is comparable with the ranges found in other versions of the TMT (Bracken, Mazur-Mosiewicz, & Glazek, 2019; Park & Schott, 2021), although slightly worse than expected for TMT-B.
Speed–accuracy measure
The new speed–accuracy score as a measure of internal cognitive control yielded interesting results. Surprisingly, the measure was relatively constant across conditions: participants’ speed–accuracy trade-offs did not differ between trials emphasizing speed and trials emphasizing accuracy. Additional analyses indicated that they did indeed perform more accurately or more quickly in the respective conditions. However, their overall trade-off, as represented in our chosen measure, exhibited no difference. Unfortunately, this seemingly locked setting did not represent trait-like characteristics; test–retest reliabilities in both samples were poor (0.00 ≤ ICC ≤ 0.48).
Fixation duration
Fixation durations were stable across all conditions and time points. The absolute agreement across sessions was constant, and we found no indication of relevant practice effects or systematic biases. Also, we found ICCs that could be classified as excellent in every condition (0.82 ≤ ICC ≤ 0.91). 
Saccade amplitude
As with the fixation durations, saccade amplitudes gave stable results across all time points. Except for the speed condition in TMT-A at T2, there was no indication of practice effects or systematic biases. Here, a negative slope suggests that participants with smaller amplitudes across sessions performed larger eye movements on their second session (b1 = –0.44). However, because this effect was only present in one condition at one time point, we cannot infer any regularities. Overall, saccadic amplitudes showed good test–retest reliability in TMT-A (0.65 ≤ ICC ≤ 0.76) and good to excellent test–retest reliability in TMT-B (0.79 ≤ ICC ≤ 0.83). 
Number of fixations
Participants performed more fixations in TMT-B than in TMT-A. This finding is consistent with previous studies on eye tracking in the TMT (Hicks et al., 2013; Linari et al., 2022; Recker et al., 2022). Across all conditions, we furthermore found a practice effect in later sessions, especially pronounced in TMT-B. Two significant slopes of the Bland–Altman regression in the speeded conditions (sample B–TMT-A; sample A–TMT-B) indicated that this practice effect was larger for participants with overall higher numbers of fixations. Still, the ICCs indicate good test–retest reliabilities across conditions (0.61 ≤ ICC ≤ 0.80). 
Number of guiding fixations
We found a slightly greater number of guiding fixations (i.e., fixations on the current target of the sequence) in TMT-B than in TMT-A. Across sessions, the number remained relatively constant; however, the ICC results indicate little interindividual stability of this number across multiple sessions (0.22 ≤ ICC ≤ 0.61). Compared with earlier studies, the absolute stability signaled by the Bland–Altman statistics resembles the stability across testing conditions found by Recker et al. (2022). The overall low number of guiding fixations could hinder this absolute stability from translating to the interindividual level: because the range of the measure is restricted, the results cannot reliably differentiate between persons (Dietze, Recker, & Poth, 2023; Hedge et al., 2018).
Number of searching fixations
The number of searching fixations provided results similar to those for the overall number of fixations. Participants performed more searching fixations (i.e., fixations locating future targets) in TMT-B than in TMT-A. Repeated sessions also led to fewer of these fixations, especially in TMT-B. Furthermore, we found a significant slope indicating larger learning effects for participants with overall greater numbers of searching fixations in the speeded condition of TMT-B in sample A. ICCs were also comparable with the values found for the overall number of fixations, but slightly worse and more dispersed. Values ranged from moderate to good (0.48 ≤ ICC ≤ 0.80).
Eye–hand span
The eye–hand span, indicating the temporal distance between eye and hand/cursor movements, was comparable between TMT-A and TMT-B but, as expected, longer when participants were instructed to emphasize accuracy over speed. Results on the stability across sessions, however, were quite diverse. Eye–hand spans appeared to lengthen across repeated sessions of TMT-A but decreased in TMT-B. In both test halves, however, the range of values was comparably large. This mixed picture was confirmed by the results on test–retest reliabilities in terms of the ICCs. Eye–hand spans in the accuracy trials were overall more reliable, with good to excellent ICCs (0.69 ≤ ICC ≤ 0.81). In the speeded trials, these values ranged from poor to moderate (0.31 ≤ ICC ≤ 0.66). Finally, the regressions within the Bland–Altman statistics indicated three instances of possibly systematic biases in practice effects; that is, in certain conditions, participants with overall larger eye–hand spans tended to show smaller spans in repeated sessions.
Scanpath length
The overall path participants covered with their eye movements was again greater in TMT-B than in TMT-A. The greater practice effects in TMT-B that were evident in the earlier test scores were repeated here. The ICCs were comparable across all conditions, with moderate to good values ranging between 0.54 and 0.65. Here, too, there was a tendency for systematic biases in three conditions, indicating larger practice effects for participants with overall longer scanpaths.
General discussion
The aim of the present study was to assess the test–retest reliabilities of different test scores in an eye-tracking version of the TMT. Cognitive assessments using the most frequently applied version of the TMT mostly rely on the examination of completion times in the two test halves A and B (Bowie & Harvey, 2006). However, considering the number of cognitive functions that contribute to test performance, the specificity of conclusions based on the usually provided test scores is limited. Recent studies have demonstrated how scores based on eye movements can shed light on the specifics that determine differences in observed performances (Linari et al., 2022; Recker et al., 2022; Wölwer & Gaebel, 2002). Using the version of the TMT recently introduced by Recker et al. (2022), we examined whether scores of this test version not only increase our general understanding of the TMT but also capture reliable interindividual differences with regard to their stability over time (for a summary of the estimated ICCs, see Figure 3). First, we found that the reliabilities of completion times were good to excellent, comparable with earlier versions of the test. Of the additionally included eye-tracking test scores, two provided good to excellent reliabilities (i.e., fixation durations and saccade amplitudes), three provided moderate to good reliabilities (i.e., number of fixations, number of searching fixations, and scanpath length), and the rest presented mixed results depending on the experimental conditions (i.e., number of guiding fixations and eye–hand spans).
Figure 3.
Intraclass correlations for dependent variables examined in the TMT. Results for the intraclass correlations for each examined score in the TMT after 3 days (sample A) and 10 to 30 days (sample B). Circles indicate results for TMT-A, triangles indicate results for TMT-B, blue indicates speed instructions, red indicates accuracy instructions, and the vertically written and color-delimited classifications are according to Landis & Koch (1977).
Based on the speed–accuracy instructions included in the test, we computed a test score to provide a measure of internal cognitive control. In contrast to the often-used difference score between test halves, this measure should present a stimulus-independent estimate of participants’ cognitive control based on their ability to shift between task sets. However, results for this speed–accuracy measure were mixed: the measure was stable over time but neither differentiated between conditions nor produced reliable individual differences. This suggests that, if the speed–accuracy trade-off were a personal trait, the absence of interindividual stability would likely be due to range restrictions. In conclusion, the speed–accuracy trade-off score offers an interesting new measure for longitudinal assessments. However, whether it can also be used for individual diagnostics remains a question for future studies.
Of the investigated eye movement scores, fixation durations and saccade amplitudes presented the most reliable results in terms of both the stability of scores across time points and the observed interindividual differences. Both measures are frequently examined in various contexts ranging from reading (Rayner, 1998) to the exploration of natural scenes (Dorr, Martinetz, Gegenfurtner, & Barth, 2010) to the performance of everyday tasks (Land & Tatler, 2009); that is, they can be associated with a multitude of different functions related to information accumulation processes. For example, fixation durations can reflect ongoing information extraction and action planning, and the length of saccades often varies depending on the context given by task sets (e.g., Hutton, 2008; Liversedge & Findlay, 2000; Mills, Hollingworth, Van der Stigchel, Hoffman, & Dodd, 2011; Salthouse & Ellis, 1980). At the same time, both fixation duration and saccade amplitude have been shown to provide interindividually stable results across multiple tasks (Andrews & Coppola, 1999; Poynter, Barber, Inman, & Wiggins, 2013; Rayner et al., 2007); that is, they provide a seemingly stable measure of interindividual differences even across long periods of time (Henderson & Luke, 2014). Our results indicate that this stability is also present in sequential actions, a type of task previously not included in the examination of the stability of these measures. In the task investigated here, however, this stability also means that both measures did not vary across conditions; that is, differences between test halves or instructions did not manifest in participants’ fixation durations or saccadic amplitudes. In studies assessing fixation durations and saccadic amplitudes in the TMT, so far only healthy samples have been investigated (Linari et al., 2022; Recker et al., 2022). Studies on clinical populations could provide cases where the interindividual stability of these measures proves useful, as differences in eye movement profiles have been connected to working memory capacities (Luke, Darowski, & Gale, 2018) or intelligence (Hayes & Henderson, 2018).
Previous studies investigating differences in eye movements during performance of the TMT have repeatedly found that the number of fixations varies between TMT-A and TMT-B. Furthermore, these studies have pointed out that these differences were due to prolonged periods of orienting, such as an increased number of searching fixations (Recker et al., 2022) or planning periods (Linari et al., 2022; Wölwer & Gaebel, 2002). Our results on the test–retest reliabilities of the related measures (i.e., number of fixations, number of searching fixations, and scanpath length) indicate that these differences can, to some degree, also reliably differentiate among persons. However, we also found that, with repeated testing, they were subject to training effects. Considering their relationship with overall performance in terms of completion times, this parallels findings that completion times of the TMT, especially TMT-B, improve with repeated testing (Buck et al., 2008; McCaffrey et al., 1993). For one thing, participants’ performance improved because they were more acquainted with the task in their second session. Also, because the spatial configurations of the stimuli were repeated between session 1 and session 2, the participants’ memory of the arrangements might have contributed to these training effects. Beyond completion times, such practice effects could also lead to a homogenization of eye movement behavior (Foerster, 2018; Foerster et al., 2011), which in turn leads to less reliable interindividual differences.
The eye–hand span presented variable results on test–retest reliability depending on the given instruction to emphasize speed or accuracy. The score varied with the speed–accuracy instructions and presented more interindividually stable results in the accuracy condition than in the speed condition. That is, it produced stable results that differentiated between conditions but mixed results with regard to differentiating between persons. This seeming contradiction between the ability to differentiate between experimental conditions but not between persons has been described as the “reliability paradox” (Hedge et al., 2018). Eye movement and related cognitive research offer paradigms that robustly produce experimental effects. For example, paradigms such as the Stroop task (e.g., MacLeod, 1991) produce stable effects for studying response interference in multiple domains. Because of this robustness, the mechanisms behind such effects are well understood and often supported by accompanying cognitive and neurophysiological theories. Therefore, using these theoretical foundations for differential examinations on an individual level could yield great benefits. However, the experimental paradigms used to study these effects often produce no reliable interindividual differences, because they are designed to minimize these differences to produce stable effects for the average observer. That is, without a certain degree of interindividual variability, a task might produce stable experimental effects but cannot reliably differentiate between persons. Our results on the eye–hand span illustrate the importance of examining the test–retest reliabilities of scores when considering their possible use for the study of interindividual differences. Not only can a measure differentiate well between conditions but not between persons; this pattern can also vary between the examined conditions. By considering the test–retest reliability, one might therefore also discover which condition to use when asking questions about either underlying cognitive processes or interindividual differences (Clark et al., 2022).
In their study on eye movements in the TMT, Recker et al. (2022) used the additionally introduced manipulation of speed–accuracy instructions to test the degree of interindividual variability of the new eye-tracking scores. They benchmarked the individual variability in each score against the variability introduced by the strong, ubiquitous manipulation that is the speed–accuracy trade-off. If the variability in a score was dominated by the manipulation, it was dubbed “experimentally dominated.” In contrast, if an interindividual difference exceeded the variability introduced by the manipulation, it was dubbed “individually dominated.” Similar to the examination of test–retest reliabilities, this classification helps to identify which scores might be best suited to answer which kinds of questions. However, because it always depicts scores relative to the condition chosen as a benchmark, it provides a form of convergent or divergent validity rather than reliability. In this way, our results on the test–retest reliabilities of the examined scores corroborate the analysis of the original study. For example, we showed that fixation durations and saccade amplitudes are not only individually dominated compared with speed–accuracy emphases, as found by Recker et al. (2022), but also a stable source of interindividual differences across multiple sessions. On the other hand, we can also see that scores reflective of speed–accuracy instructions in the original study (e.g., completion times, number of fixations) can still provide good results in terms of their interindividual reliabilities and thus can be used to study differences between persons regarding this construct.
Eye-tracking measures bear ambiguous results for the study of interindividual differences. Although they hold great potential for clinical diagnostics (Itti, 2015) and for correlational approaches in experimental psychology (Mollon et al., 2017; Wilmer, 2008), many established tasks from cognitive research and vision science translate poorly to these fields of application (Clark et al., 2022; Hedge et al., 2018). Our results parallel this ambiguity in the sense that they once again demonstrate that certain eye-tracking measures can reliably differentiate among persons and produce stable results even with repeated testing (e.g., fixation duration, saccade amplitude). However, measures that differentiate well between experimental conditions (i.e., those that produce stable experimental effects) are less reliable (e.g., number of [searching] fixations) or are possibly too restricted in their range to reliably represent individual differences (e.g., number of guiding fixations and the speed–accuracy measure). This highlights the importance of investigating the test–retest reliability of scores when aiming to use them for interindividual difference research and, in our case, to ultimately apply them in neuropsychological settings. Accordingly, although not conducted with a clinical sample, the present study is a necessary step toward clinical applications. Results of healthy participants do not necessarily transfer to clinical populations; however, taking paradigms beyond the setting of basic research requires creating the best possible premises for their application. Test scores with the most reliable results in healthy samples can be taken as the best candidates to assess in clinical samples. Establishing the reliability of test scores in a healthy sample is therefore an important and frequently applied technique to identify the most promising scores to relate to other cognitive and possibly impaired functions and to create benchmarks against healthy performance (Anderson, 2013; Gestefeld, Schneider, & Poth, 2023; Klein & Fischer, 2005; Paap & Sawi, 2016).
Conclusions
To sum up, the present study has established the test–retest reliability of a TMT that includes eye-tracking scores. Although previous studies indicated the usefulness of examining eye movements within the test to increase the general understanding of the determinants of performance, we provide evidence that some of these scores can also be used to study interindividual differences in these performances. Scores reflecting processes of information accumulation, such as fixation durations, saccade amplitudes, and the number of fixations, seem best suited for the study of interindividual differences, whereas scores reflecting processes of eye–hand coordination, such as the number of guiding fixations and the eye–hand span, seem to differentiate better between experimental conditions than between persons. In this way, we can increase our understanding of eye movements and of possible applications of derived scores in the TMT.
Acknowledgments
The authors thank Rebecca Foerster for her work in acquiring the funding for this project, Nina Held for helpful discussions, and Paula Dornhöfer for her help in collecting the data as part of her thesis.
Funded by a grant from the Deutsche Forschungsgemeinschaft (418552203 to CHP). We acknowledge support for the publication costs by the Open Access Publication Fund of Bielefeld University and the Deutsche Forschungsgemeinschaft. 
Author contributions: L.R. – Conceptualization, Formal analysis, Software, Visualization, Writing – original draft; C.H.P. – Conceptualization, Supervision, Writing – review & editing.
Data availability: The data, analysis code, and experiment code are available on the Open Science Framework (https://osf.io/fwhjb/). 
Commercial relationships: none. 
Corresponding author: Lukas Recker. 
Email: lukas.recker@uni-bielefeld.de. 
Address: Department of Psychology, Bielefeld University, P.O. Box 100131, Bielefeld D-33501, Germany. 
Footnotes
1  Initially, 10 to 14 days were planned as the test–retest interval of sample B. This is also the interval mentioned in the preregistration of this study. However, because data collection was heavily affected by the COVID-19 pandemic, only 21 participants were able to return within 10 to 14 days. Therefore, we decided to extend the interval so that participants could complete their second session at a later point in time. In the end, 34 participants in sample B completed two sessions with a final test–retest interval of 10 to 30 days.
References
Anderson, T. (2013). Could saccadic function be a useful marker of stroke recovery? Journal of Neurology, Neurosurgery & Psychiatry, 84(3), 242–242, https://doi.org/10.1136/jnnp-2012-304481.
Andrews, T. J., & Coppola, D. M. (1999). Idiosyncratic characteristics of saccadic eye movements when viewing different visual environments. Vision Research, 39(17), 2947–2953, https://doi.org/10.1016/S0042-6989(99)00019-X.
Arbuthnott, K., & Frank, J. (2000). Trail Making Test, Part B as a measure of executive control: Validation using a set-switching paradigm. Journal of Clinical and Experimental Neuropsychology, 22(4), 518–528, https://doi.org/10.1076/1380-3395(200008)22:4;1-0;FT518.
Ashendorf, L., Jefferson, A. L., O'Connor, M. K., Chaisson, C., Green, R. C., & Stern, R. A. (2008). Trail Making Test errors in normal aging, mild cognitive impairment, and dementia. Archives of Clinical Neuropsychology, 23(2), 129–137, https://doi.org/10.1016/j.acn.2007.11.005.
Bargary, G., Bosten, J. M., Goodbourn, P. T., Lawrance-Owen, A. J., Hogg, R. E., & Mollon, J. D. (2017). Individual differences in human eye movements: An oculomotor signature? Vision Research, 141, 157–169, https://doi.org/10.1016/j.visres.2017.03.001. [PubMed]
Bland, J. M., & Altman, D. G. (1999). Measuring agreement in method comparison studies. Statistical Methods in Medical Research, 8(2), 135–160, https://doi.org/10.1177/096228029900800204. [PubMed]
Boot, W. R., Becic, E., & Kramer, A. F. (2009). Stable individual differences in search strategy?: The effect of task demands and motivational factors on scanning strategy in visual search. Journal of Vision, 9(3):7, 1–16, https://doi.org/10.1167/9.3.7. [PubMed]
Bowie, C. R., & Harvey, P. D. (2006). Administration and interpretation of the Trail Making Test. Nature Protocols, 1(5), 2277–2281, https://doi.org/10.1038/nprot.2006.390. [PubMed]
Bracken, M. R., Mazur-Mosiewicz, A., & Glazek, K. (2019). Trail Making Test: Comparison of paper-and-pencil and electronic versions. Applied Neuropsychology: Adult, 26(6), 522–532, https://doi.org/10.1080/23279095.2018.1460371. [PubMed]
Buck, K. K., Atkinson, T. M., & Ryan, J. P. (2008). Evidence of practice effects in variants of the Trail Making Test during serial assessment. Journal of Clinical and Experimental Neuropsychology, 30(3), 312–318, https://doi.org/10.1080/13803390701390483. [PubMed]
Bundesen, C., Habekost, T., & Kyllingsbæk, S. (2011). A neural theory of visual attention and short-term memory (NTVA). Neuropsychologia, 49(6), 1446–1457, https://doi.org/10.1016/j.neuropsychologia.2010.12.006. [PubMed]
Carrasco, M., & McElree, B. (2001). Covert attention accelerates the rate of visual information processing. Proceedings of the National Academy of Sciences, USA, 98(9), 5363–5367, https://doi.org/10.1073/pnas.081074098.
Carter, B. T., & Luke, S. G. (2018). Individuals’ eye movements in reading are highly consistent across time and trial. Journal of Experimental Psychology: Human Perception and Performance, 44(3), 482–492, https://doi.org/10.1037/xhp0000471. [PubMed]
Castelhano, M. S., Mack, M. L., & Henderson, J. M. (2009). Viewing task influences eye movement control during active scene perception. Journal of Vision, 9(3):6, 1–15, https://doi.org/10.1167/9.3.6. [PubMed]
Cicchetti, D. V., & Sparrow, S. A. (1981). Developing criteria for establishing interrater reliability of specific items: Applications to assessment of adaptive behavior. American Journal of Mental Deficiency, 86, 127–137. [PubMed]
Clark, K., Birch-Hurst, K., Pennington, C. R., Petrie, A. C. P., Lee, J. T., & Hedge, C. (2022). Test-retest reliability for common tasks in vision science. Journal of Vision, 22(8):18, 1–18, https://doi.org/10.1167/jov.22.8.18.
Cohen, J. D. (2017). Cognitive control. In The Wiley handbook of cognitive control (pp. 1–28). New York: John Wiley & Sons, https://doi.org/10.1002/9781118920497.ch1.
Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52(4), 281–302, https://doi.org/10.1037/h0040957. [PubMed]
Crowe, S. F. (1998). The differential contribution of mental tracking, cognitive flexibility, visual search, and motor speed to performance on parts A and B of the trail making test. Journal of Clinical Psychology, 54(5), 585–591, https://doi.org/10.1002/(SICI)1097-4679(199808)54:5<585::AID-JCLP4>3.0.CO;2-K. [PubMed]
Datta, D. (2017). blandr: A Bland-Altman method comparison package for R. Retrieved from https://github.com/deepankardatta/blandr.
Diamond, A. (2013). Executive functions. Annual Review of Psychology, 64, 135–168, https://doi.org/10.1146/annurev-psych-113011-143750. [PubMed]
Dietze, N., & Poth, C. H. (2022). Phasic alertness is unaffected by the attentional set for orienting. Journal of Cognition, 5(1), 46, https://doi.org/10.5334/joc.242. [PubMed]
Dietze, N., Recker, L., & Poth, C. H. (2023). Warning signals only support the first action in a sequence. Cognitive Research: Principles and Implications, 8(1), 29, https://doi.org/10.1186/s41235-023-00484-z. [PubMed]
Doros, G., & Lew, R. (2010). Design based on intra-class correlation coefficients. Current Research in Biostatistics, 1(1), 1–8, https://doi.org/10.3844/amjbsp.2010.1.8.
Dorr, M., Martinetz, T., Gegenfurtner, K. R., & Barth, E. (2010). Variability of eye movements when viewing dynamic natural scenes. Journal of Vision, 10(10):28, 1–17, https://doi.org/10.1167/10.10.28.
Findlay, J. M., & Gilchrist, I. D. (2008). Active vision: The psychology of looking and seeing. Oxford: Oxford University Press, https://doi.org/10.1093/acprof:oso/9780198524793.001.0001.
Foerster, R. M. (2018). “Looking-at-nothing” during sequential sensorimotor actions: Long-term memory-based eye scanning of remembered target locations. Vision Research, 144, 29–37, https://doi.org/10.1016/j.visres.2018.01.005. [PubMed]
Foerster, R. M., Carbone, E., Koesling, H., & Schneider, W. X. (2011). Saccadic eye movements in a high-speed bimanual stacking task: Changes of attentional control during learning and automatization. Journal of Vision, 11(7):1, 1–16, https://doi.org/10.1167/11.7.1.
Foerster, R. M., Poth, C. H., Behler, C., Botsch, M., & Schneider, W. X. (2016). Using the virtual reality device Oculus Rift for neuropsychological assessment of visual processing capabilities. Scientific Reports, 6, 37016, https://doi.org/10.1038/srep37016. [PubMed]
Foerster, R. M., Poth, C. H., Behler, C., Botsch, M., & Schneider, W. X. (2019). Neuropsychological assessment of visual selective attention and processing capacity with head-mounted displays. Neuropsychology, 33, 309–318, https://doi.org/10.1037/neu0000517. [PubMed]
Foerster, R. M., & Schneider, W. X. (2015). Anticipatory eye movements in sensorimotor actions: On the role of guiding fixations during learning. Cognitive Processing, 16(1), 227–231, https://doi.org/10.1111/nyas.12729. [PubMed]
Gamer, M., Lemon, J., & Singh, I. F. P. (2019). irr: Various coefficients of interrater reliability and agreement. Vienna, Austria: R Foundation for Statistical Computing, https://CRAN.R-project.org/package=irr.
Gestefeld, B., Schneider, W. X., & Poth, C. H. (2023). Reliability of eye movement measures in pro-, anti- and memory-guided saccade tasks. Manuscript submitted for publication.
Goodhew, S. C., & Edwards, M. (2019). Translating experimental paradigms into individual-differences research: Contributions, challenges, and practical recommendations. Consciousness and Cognition, 69, 14–25, https://doi.org/10.1016/j.concog.2019.01.008. [PubMed]
Haghayegh, S., Kang, H.-A., Khoshnevis, S., Smolensky, M. H., & Diller, K. R. (2020). A comprehensive guideline for Bland–Altman and intra class correlation calculations to properly compare two methods of measurement and interpret findings. Physiological Measurement, 41(5), 055012, https://doi.org/10.1088/1361-6579/ab86d6. [PubMed]
Hayes, T. R., & Henderson, J. M. (2018). Scan patterns during scene viewing predict individual differences in clinical traits in a normative sample. PLoS One, 13(5), 1–17, https://doi.org/10.1371/journal.pone.0196654.
Hedge, C., Powell, G., & Sumner, P. (2018). The reliability paradox: Why robust cognitive tasks do not produce reliable individual differences. Behavior Research Methods, 50(3), 1166–1186, https://doi.org/10.3758/s13428-017-0935-1. [PubMed]
Heitz, R. P. (2014). The speed-accuracy tradeoff: History, physiology, methodology, and behavior. Frontiers in Neuroscience, 8, 1–19, https://doi.org/10.3389/fnins.2014.00150. [PubMed]
Henderson, J. M., & Choi, W. (2015). Neural correlates of fixation duration during real-world scene viewing: Evidence from fixation-related (fire) fMRI. Journal of Cognitive Neuroscience, 27(6), 1137–1145, https://doi.org/10.1162/jocn_a_00769. [PubMed]
Henderson, J. M., & Luke, S. G. (2014). Stable individual differences in saccadic eye movements during reading, pseudoreading, scene viewing, and scene search. Journal of Experimental Psychology: Human Perception and Performance, 40(4), 1390–1400, https://doi.org/10.1037/a0036330. [PubMed]
Hicks, S. L., Sharma, R., Khan, A. N., Berna, C. M., Waldecker, A., Talbot, K., & Turner, M. R. (2013). An eye-tracking version of the trail-making test. PLoS One, 8(12), e84061, https://doi.org/10.1371/journal.pone.0084061. [PubMed]
Hutton, S. B. (2008). Cognitive control of saccadic eye movements. Brain and Cognition, 68(3), 327–340, https://doi.org/10.1016/j.bandc.2008.08.021. [PubMed]
Hwang, J., Brothers, R. M., Castelli, D. M., Glowacki, E. M., Chen, Y. T., Salinas, M. M., & Calvert, H. G. (2016). Acute high-intensity exercise-induced cognitive enhancement and brain-derived neurotrophic factor in young, healthy adults. Neuroscience Letters, 630, 247–253, https://doi.org/10.1016/j.neulet.2016.07.033. [PubMed]
Itti, L. (2015). New eye-tracking techniques may revolutionize mental health screening. Neuron, 88(3), 442–444, https://doi.org/10.1016/j.neuron.2015.10.033. [PubMed]
Klein, C., & Fischer, B. (2005). Instrumental and test–retest reliability of saccadic measures. Biological Psychology, 68(3), 201–213, https://doi.org/10.1016/j.biopsycho.2004.06.005. [PubMed]
Koo, T. K., & Li, M. Y. (2016). A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of Chiropractic Medicine, 15(2), 155–163, https://doi.org/10.1016/j.jcm.2016.02.012. [PubMed]
Kopp, B., Rösser, N., Tabeling, S., Stürenburg, H. J., De Haan, B., Karnath, H. O., & Wessel, K. (2015). Errors on the trail making test are associated with right hemispheric frontal lobe damage in stroke patients. Behavioural Neurology, 2015, 309235, https://doi.org/10.1155/2015/309235. [PubMed]
Kortte, K. B., Horner, M. D., & Windham, W. (2002). The trail making test, part B: Cognitive flexibility or ability to maintain set? Applied Neuropsychology, 9(2), 106–109, https://doi.org/10.1207/S15324826AN0902_5.
Krauzlis, R. J., Goffart, L., & Hafed, Z. M. (2017). Neuronal control of fixation and fixational eye movements. Philosophical Transactions of the Royal Society B: Biological Sciences, 372(1718), 20160205, https://doi.org/10.1098/rstb.2016.0205.
Laere, E., Tee, S. F., & Tang, P. Y. (2018). Assessment of cognition in schizophrenia using trail making test: A meta-analysis. Psychiatry Investigation, 15(10), 945–955, https://doi.org/10.30773/pi.2018.07.22. [PubMed]
Land, M. F., & Hayhoe, M. (2001). In what ways do eye movements contribute to everyday activities? Vision Research, 41(25–26), 3559–3565, https://doi.org/10.1016/S0042-6989(01)00102-X. [PubMed]
Land, M., Mennie, N., & Rusted, J. (1999). The roles of vision and eye movements in the control of activities of daily living. Perception, 28(11), 1311–1328, https://doi.org/10.1068/p2935. [PubMed]
Land, M., & Tatler, B. (2009). Looking and acting: Vision and eye movements in natural behaviour. Oxford: Oxford Academic.
Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–174, https://doi.org/10.2307/2529310. [PubMed]
Linari, I., Juantorena, G. E., Ibáñez, A., Petroni, A., & Kamienkowski, J. E. (2022). Unveiling Trail Making Test: Visual and manual trajectories indexing multiple executive processes. Scientific Reports, 12(1), 14265, https://doi.org/10.1038/s41598-022-16431-9. [PubMed]
Liversedge, S. P., & Findlay, J. M. (2000). Saccadic eye movements and cognition. Trends in Cognitive Sciences, 4(1), 6–14, https://doi.org/10.1016/S1364-6613(99)01418-7. [PubMed]
Luke, S. G., Darowski, E. S., & Gale, S. D. (2018). Predicting eye-movement characteristics across multiple tasks from working memory and executive control. Memory and Cognition, 46(5), 826–839, https://doi.org/10.3758/s13421-018-0798-4.
MacLeod, C. M. (1991). Half a century of research on the Stroop effect: An integrative review. Psychological Bulletin, 109(2), 163–203, https://doi.org/10.1037/0033-2909.109.2.163. [PubMed]
Martinez-Conde, S., Macknik, S. L., Troncoso, X. G., & Hubel, D. H. (2009). Microsaccades: A neurophysiological analysis. Trends in Neurosciences, 32(9), 463–475, https://doi.org/10.1016/j.tins.2009.05.006. [PubMed]
McCaffrey, R. J., Ortega, A., & Haase, R. F. (1993). Effects of repeated neuropsychological assessments. Archives of Clinical Neuropsychology, 8(6), 519–524, https://doi.org/10.1016/0887-6177(93)90052-3.
Mills, M., Hollingworth, A., Van der Stigchel, S., Hoffman, L., & Dodd, M. D. (2011). Examining the influence of task set on eye movements and fixations. Journal of Vision, 11(8):17, 1–15, https://doi.org/10.1167/11.8.17.
Mollon, J. D., Bosten, J. M., Peterzell, D. H., & Webster, M. A. (2017). Individual differences in visual science: What can be learned and what is good experimental practice? Vision Research, 141, 4–15, https://doi.org/10.1016/j.visres.2017.11.001. [PubMed]
Muir, R. T., Lam, B., Honjo, K., Harry, R. D., McNeely, A. A., Gao, F. Q., & Black, S. E. (2015). Trail making test elucidates neural substrates of specific poststroke executive dysfunctions. Stroke, 46(10), 2755–2761, https://doi.org/10.1161/STROKEAHA.115.009936. [PubMed]
O'Rourke, J. J. F., Beglinger, L. J., Smith, M. M., Mills, J., Moser, D. J., Rowe, K. C., & Paulsen, J. S. (2011). The Trail Making Test in prodromal Huntington disease: Contributions of disease progression to test performance. Journal of Clinical and Experimental Neuropsychology, 33(5), 567–579, https://doi.org/10.1080/13803395.2010.541228. [PubMed]
Paap, K. R., & Sawi, O. (2016). The role of test-retest reliability in measuring individual and group differences in executive functioning. Journal of Neuroscience Methods, 274, 81–93, https://doi.org/10.1016/j.jneumeth.2016.10.002. [PubMed]
Park, S.-Y., & Schott, N. (2021). The trail-making-test: Comparison between paper-and-pencil and computerized versions in young and healthy older adults. Applied Neuropsychology: Adult, 29(5), 1208–1220, https://doi.org/10.1080/23279095.2020.1864374. [PubMed]
Peirce, J., Gray, J. R., Simpson, S., MacAskill, M., Höchenberger, R., Sogo, H., & Lindeløv, J. K. (2019). PsychoPy2: Experiments in behavior made easy. Behavior Research Methods, 51(1), 195–203, https://doi.org/10.3758/s13428-018-01193-y. [PubMed]
Periáñez, J. A., Ríos-Lago, M., Rodríguez-Sánchez, J. M., Adrover-Roig, D., Sánchez-Cubillo, I., Crespo-Facorro, B., & Barceló, F. (2007). Trail Making Test in traumatic brain injury, schizophrenia, and normal ageing: Sample comparisons and normative data. Archives of Clinical Neuropsychology, 22(4), 433–447, https://doi.org/10.1016/j.acn.2007.01.022.
Poth, C. H., & Horstmann, G. (2017). Assessing the monitor warm-up time required before a psychological experiment can begin. The Quantitative Methods for Psychology, 13(3), 166–173, https://doi.org/10.20982/tqmp.13.3.p166.
Poynter, W., Barber, M., Inman, J., & Wiggins, C. (2013). Individuals exhibit idiosyncratic eye-movement behavior profiles across tasks. Vision Research, 89, 32–38, https://doi.org/10.1016/j.visres.2013.07.002. [PubMed]
R Core Team. (2019). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing.
Rae, B., Heathcote, A., Donkin, C., Averell, L., & Brown, S. (2014). The hare and the tortoise: Emphasizing speed can change the evidence used to make decisions. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(5), 1226–1243, https://doi.org/10.1037/a0036801. [PubMed]
Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124(3), 372–422, https://doi.org/10.1037/0033-2909.124.3.372. [PubMed]
Rayner, K., Li, X., Williams, C. C., Cave, K. R., & Well, A. D. (2007). Eye movements during information processing tasks: Individual differences and cultural effects. Vision Research, 47(21), 2714–2726, https://doi.org/10.1016/j.visres.2007.05.007. [PubMed]
Recker, L., Foerster, R. M., Schneider, W. X., & Poth, C. H. (2022). Emphasizing speed or accuracy in an eye-tracking version of the Trail-Making-Test: Towards experimental diagnostics for decomposing executive functions. PLoS One, 17(9), e0274579, https://doi.org/10.1371/journal.pone.0274579. [PubMed]
Reitan, R. M. (1958). Validity of the Trail Making Test as an indicator of organic brain damage. Perceptual and Motor Skills, 8(3), 271–276, https://doi.org/10.2466/pms.1958.8.3.271.
Salthouse, T. A. (2011). What cognitive abilities are involved in trail-making performance? Intelligence, 39(4), 222–232, https://doi.org/10.1016/j.intell.2011.03.001. [PubMed]
Salthouse, T. A., & Ellis, C. L. (1980). Determinants of eye-fixation duration. The American Journal of Psychology, 93(2), 207–234, https://doi.org/10.2307/1422228. [PubMed]
Sánchez-Cubillo, I., Periáñez, J. A., Adrover-Roig, D., Rodríguez-Sánchez, J. M., Ríos-Lago, M., Tirapu, J., & Barceló, F. (2009). Construct validity of the Trail Making Test: Role of task-switching, working memory, inhibition/interference control, and visuomotor abilities. Journal of the International Neuropsychological Society, 15(3), 438–450, https://doi.org/10.1017/S1355617709090626.
Schall, J. D., & Thompson, K. G. (1999). Neural selection and control of visually guided eye movements. Annual Review of Neuroscience, 22(1), 241–259, https://doi.org/10.1146/annurev.neuro.22.1.241. [PubMed]
Schütz, A. C., Braun, D. I., & Gegenfurtner, K. R. (2011). Eye movements and perception: A selective review. Journal of Vision, 11(5):9, 1–30, https://doi.org/10.1167/11.5.9.
Staub, A. (2021). How reliable are individual differences in eye movements in reading? Journal of Memory and Language, 116, 104190, https://doi.org/10.1016/j.jml.2020.104190.
Varjacic, A., Mantini, D., Demeyere, N., & Gillebert, C. R. (2018). Neural signatures of Trail Making Test performance: Evidence from lesion-mapping and neuroimaging studies. Neuropsychologia, 115, 78–87, https://doi.org/10.1016/j.neuropsychologia.2018.03.031. [PubMed]
Wickelgren, W. A. (1977). Speed-accuracy tradeoff and information processing dynamics. Acta Psychologica, 41(1), 67–85, https://doi.org/10.1016/0001-6918(77)90012-9.
Wilmer, J. B. (2008). How to use individual differences to isolate functional organization, biology, and utility of visual functions; with illustrative proposals for stereopsis. Spatial Vision, 21(6), 561–579, https://doi.org/10.1163/156856808786451408. [PubMed]
Wilmer, J. B. (2017). Individual differences in face recognition: A decade of discovery. Current Directions in Psychological Science, 26(3), 225–230, https://doi.org/10.1177/0963721417710693.
Wolfe, J. M. (2021). Guided Search 6.0: An updated model of visual search. Psychonomic Bulletin & Review, 28(4), 1060–1092, https://doi.org/10.3758/s13423-020-01859-9. [PubMed]
Wölwer, W., & Gaebel, W. (2002). Impaired Trail-Making Test-B performance in patients with acute schizophrenia is related to inefficient sequencing of planning and acting. Journal of Psychiatric Research, 36(6), 407–416, https://doi.org/10.1016/S0022-3956(02)00050-X. [PubMed]
Figure 1. Example stimulus displays for TMT-A (A) and TMT-B (B).
Figure 2. Illustrative example of the chosen reliability metrics. The left panel shows an approximate depiction (dashed line) of an intraclass correlation representing the relative agreement between two sessions. The right panel shows a Bland–Altman plot, including its descriptive metrics, representing the absolute agreement between sessions. The bias is the mean difference between sessions across all participants. The limits of agreement (LOAs) mark the values that contain 95% of the individual differences between sessions. The weight b1 (highlighted in red) of the regression of the between-session difference on the mean completion time can uncover systematic differences arising between sessions. Gray areas represent 95% CIs. The illustrated data represent the accuracy condition in TMT-A of sample A.
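As a complement to the figure, the following base R sketch shows one way to compute the Bland–Altman metrics described in the caption (bias, limits of agreement, and the regression weight b1). The session1 and session2 vectors are simulated placeholders, not the study's data; the study's own analyses presumably relied on the blandr package (Datta, 2017) cited in the references.

    # Minimal sketch of Bland-Altman metrics on simulated data (not the study's data)
    set.seed(2)
    session1 <- rnorm(31, mean = 25, sd = 5)
    session2 <- session1 + rnorm(31, mean = -1, sd = 2)

    diffs <- session2 - session1        # per-participant difference between sessions
    means <- (session1 + session2) / 2  # per-participant mean (e.g., completion time)

    bias <- mean(diffs)                        # mean difference between sessions
    loa  <- bias + c(-1.96, 1.96) * sd(diffs)  # 95% limits of agreement

    # Slope b1 of the regression diffs ~ means flags systematic differences
    b1 <- unname(coef(lm(diffs ~ means))[2])

    round(c(bias = bias, loa_lower = loa[1], loa_upper = loa[2], b1 = b1), 3)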
Figure 3. Intraclass correlations for each examined score in the TMT after 3 days (sample A) and 10 to 30 days (sample B). Circles indicate results for TMT-A, triangles indicate results for TMT-B, blue indicates speed instructions, and red indicates accuracy instructions; the vertically written and color-delimited classifications follow Landis and Koch (1977).
Table 1. Sample A results on the test–retest reliabilities of scores in the computerized TMT. Sessions were 3 days apart. For each investigated test score, results are given for the Bland–Altman analyses, including the bias, the limits of agreement (LOAs; i.e., bias ± 1.96 × standard deviation), and the slope (b1), as well as the ICCs based on a two-way, mixed-effects model for average agreement (ICC[A,2]). For biases and ICCs, 95% CIs are given in square brackets. For LOAs, the standard errors of the LOAs are given in parentheses. Slopes printed in bold indicate significant weights (*p < 0.05, **p < 0.01, ***p < 0.001).
Table 2. Sample B results on the test–retest reliabilities of scores in the computerized TMT. Sessions were 10 to 30 days apart. For each investigated test score, results are given for the Bland–Altman analyses, including the bias, the limits of agreement (LOAs; i.e., bias ± 1.96 × standard deviation), and the slope (b1), as well as the ICCs based on a two-way, mixed-effects model for average agreement (ICC[A,2]). For biases and ICCs, 95% CIs are given in square brackets. For LOAs, the standard errors of the LOAs are given in parentheses. Slopes printed in bold indicate significant weights (*p < 0.05, **p < 0.01, ***p < 0.001).
Table 3. Precision estimates for the given sample sizes: ranges of the 95% CIs around three assumed ICC values that we should be able to obtain, given the sample sizes of sample A (n = 30) and sample B (n = 33).