Abstract
We investigated how AI explanations help primary eye care providers distinguish immediate from non-urgent referrals for glaucoma surgical care. We developed explainable AI algorithms that predict the need for glaucoma surgery from routine eye care data in order to identify high-risk patients, incorporating both intrinsic and post-hoc explainability. We then conducted an online study with optometrists to assess human-AI team performance, measuring referral accuracy, interaction with the AI, agreement rates, task time, and perceptions of the user experience. Among 87 participants, AI support improved referral accuracy (59.9% with AI vs. 50.8% without), though human-AI teams still underperformed the AI alone: on a separate test set, our black-box and intrinsic models achieved 77% and 71% accuracy, respectively, in predicting surgical outcomes. Participants reported using the AI advice more with the intrinsic model and found it more useful and promising. Without explanations, deviations from the AI recommendations increased. AI support reduced perceived challenges but did not increase workload, confidence, or trust. We identify opportunities for human-AI teaming in glaucoma management, noting that AI support enhances referral accuracy yet leaves a performance gap relative to the AI alone, even with explanations. Because human involvement remains crucial in medical decision-making, future research should optimize this collaboration to ensure positive experiences and the safe use of AI.
Funding: 5 K23 EY032204-04; unrestricted grant from Research to Prevent Blindness; BrightFocus National Glaucoma Research Grant 825150; ARVO Epstein Award