Abstract
It is expected that as observers learn a new interface (e.g., a medical imaging system), they respond more quickly over time. However, it is unclear how learning rates are affected by expertise in breast cancer detection. A total of 243 participants (162 novices; 81 experts with breast imaging experience) searched 2D displays (Experiment 1, n = 60, 48% experts; Experiment 3, n = 45%) and 3D displays (Experiment 2, n = 102, 31% experts; Experiment 4, n = 70, 21% experts). Displays consisted of letter "T" targets among letter "L" distractors (Experiments 1 and 2) or simulated breast anatomy (OpenVCT framework) containing up to two masses and/or calcifications (Experiments 3 and 4). This study leverages mixed-effects modeling to account for inter-observer variability (i.e., a random intercept per participant) while simultaneously exploring effects of practice (trial number), number of targets, stimulus complexity, and display type (2D vs. 3D) on search time. A model containing fixed effects for trial number, number of targets, display type, and whether the participant had imaging experience was the best-performing model. First, a substantial amount of variation in quit time was explained by inter-observer differences. Participants responded significantly more quickly with each successive trial (β = -0.16, p < .001), with each additional abnormality (β = -7.59, p < .001), and if they had breast imaging experience (β = -3.22, p < .001), but responded more slowly with 3D displays (β = 7.14, p < .001). Experience matters, in terms of individual differences but also within the experiment, and these distinct but related sources of variation should be carefully considered. Given that substantial variation in quit time was explained by between-person differences (i.e., participant intercepts), future work may evaluate what predicts these differences beyond expertise class alone (e.g., years of experience, conscientiousness, level of fatigue, time of day).
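The model structure described above can be sketched with a small simulation. This is not the authors' analysis: the coefficient values are taken from the abstract, but the sample sizes, noise levels, and the use of pooled ordinary least squares (rather than a mixed-effects fit) are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch (not the authors' analysis): simulate quit times under
# a random-intercept model resembling the one described in the abstract,
# then recover the fixed effects with pooled ordinary least squares.
# Coefficients are taken from the abstract; sample sizes and noise levels
# are arbitrary assumptions.
rng = np.random.default_rng(0)
n_part, n_trials = 200, 30
trial = np.tile(np.arange(n_trials), n_part)               # practice (trial number)
targets = rng.integers(0, 3, n_part * n_trials)            # 0-2 abnormalities
expert = np.repeat(rng.integers(0, 2, n_part), n_trials)   # imaging experience
is3d = np.repeat(rng.integers(0, 2, n_part), n_trials)     # 2D vs. 3D display

u = np.repeat(rng.normal(0.0, 4.0, n_part), n_trials)      # random intercept per participant
y = (30.0 - 0.16 * trial - 7.59 * targets - 3.22 * expert + 7.14 * is3d
     + u + rng.normal(0.0, 2.0, n_part * n_trials))        # simulated quit time

# Pooled OLS is unbiased for the fixed effects here because the random
# intercepts are independent of every predictor (a full mixed-effects fit
# would additionally estimate the intercept variance).
X = np.column_stack([np.ones_like(y), trial, targets, expert, is3d])
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coefs[1:], 2))  # estimates for trial, targets, experience, 3D
```

In practice one would fit this with a dedicated mixed-effects routine so that the between-person intercept variance is estimated explicitly rather than absorbed into the residual.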