Abstract
Radiology professionals and trainees are often classified into broad categories of expertise inferred from their title, level of training (e.g., intern, resident, attending, specialist), and/or years of experience, rather than by any objective metric of performance. These categories dramatically oversimplify reality, overlooking individual differences, exceptional skill or natural talent, and potential age-related declines in sensitivity. Rank is usually tied to years on the job, and individuals typically move up but not down the ladder, even if their skills diminish over time. A more principled measure of perceptual expertise would provide a basis for optimizing training assessments, allowing medical education programs to test directly whether trainees are behaving like experts and, if not, which specific behaviors have not yet reached the desired level of performance. We conducted a psychophysical and eye-tracking study aimed at quantifying the gaze dynamics that professional radiologists use to detect abnormalities in medical images. Naive individuals with no medical imaging experience (n=9), radiology residents (n=11), and attending radiologists (n=6) searched through chest X-rays, each of which contained one abnormality (a potentially cancerous nodule). Consistent with prior work, we found that some observers performed better than their rank and years of experience would predict. In fact, some residents early in their training outperformed attending radiologists, despite an extensive experience differential. These results highlight that, at best, experience is an uncertain predictor of expertise and, at worst, it reflects little more than seniority. We therefore propose that individuals instead be grouped by their objectively measured performance on specific tasks. The radiology field needs to move beyond the standard rank distinction between attendings and trainees and to develop more efficient strategies and methods for quantifying expertise.