Abstract
Robust processes have been established to review and approve new treatment options in ophthalmic care, whether pharmaceuticals or medical devices. These processes are designed to ensure that any new medical product is safe, effective, and well understood in terms of how it functions. In contrast, the rapid evolution of artificial intelligence (AI) has been heralded as a game-changer in healthcare, promising to transform patient care, doctor-patient interactions, and back-office functions. While AI's potential has been demonstrated by numerous groups worldwide, applying the same rigorous standards used for medical product approval reveals several areas where AI must improve. In this talk, I will focus on our group's efforts to develop safe and robust AI models for a range of tasks in ophthalmology. Specifically, we will explore how uncertainty estimates can be used to determine the value of data and to enhance model robustness, both within individual models and against adversarial attacks. These strategies will be applied to tasks such as segmentation, classification, and object detection. One of the significant challenges in developing medical AI is dealing with imbalanced data, especially when identifying small objects. We will also review state-of-the-art attention-based network architectures that can be developed to address these challenges and to exploit known structure in retinal anatomy, particularly for object detection tasks.
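As a minimal illustration of the uncertainty-based data valuation mentioned above (this sketch is not taken from the talk; the Monte Carlo dropout setup, shapes, and function names are assumptions chosen for illustration), the example below ranks images by predictive entropy computed from repeated stochastic forward passes, so that the most uncertain, and therefore potentially most informative, samples can be prioritised for labelling or review.

```python
import numpy as np

def predictive_entropy(mc_probs: np.ndarray) -> np.ndarray:
    """Predictive entropy per sample from T stochastic forward passes.

    mc_probs: array of shape (T, N, C) -- softmax outputs from T
    Monte Carlo dropout passes over N images with C classes.
    """
    mean_probs = mc_probs.mean(axis=0)  # average over passes -> (N, C)
    return -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=1)  # (N,)

def rank_by_uncertainty(mc_probs: np.ndarray) -> np.ndarray:
    """Indices of samples sorted from most to least uncertain."""
    return np.argsort(-predictive_entropy(mc_probs))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated softmax outputs: 20 MC passes, 100 images, 3 classes.
    logits = rng.normal(size=(20, 100, 3))
    probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    ranking = rank_by_uncertainty(probs)
    print("Most uncertain samples:", ranking[:5])
```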
Funding: This research was supported by the NIHR Moorfields Biomedical Research Centre. This work was also supported by a National Institutes of Health Core Grant (EY014800) and an Unrestricted Grant from Research to Prevent Blindness, New York, NY, to the Department of Ophthalmology & Visual Sciences, University of Utah.