Abstract
Visual cortical implants are a promising neurotechnology designed to provide blind individuals with a rudimentary form of visual perception known as "phosphene vision." When implanted microelectrode arrays electrically stimulate areas of the brain such as the primary visual cortex, the recipient perceives phosphenes: small, localized spots of light. In my lab, we develop algorithms that process visual information to enable meaningful scene representation within the technological and biological constraints of these implants. We evaluate these algorithms through biologically plausible simulations with sighted adults, aiming to approximate the visual experience of prospective implant users. In this talk, I will present our latest findings, highlighting the importance of user-centred design in neurotechnology, with particular emphasis on visual cortical implants. I will also demonstrate how a dynamic, gaze-controlled semantic segmentation approach to scene representation can improve object recognition in phosphene vision.
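
For intuition, the sketch below illustrates the kind of simulation pipeline the abstract describes: a gaze-centred window of a semantic segmentation mask is sampled at a grid of phosphene locations, and a phosphene is rendered wherever its centre falls on the segmented object. This is not the lab's actual implementation; the function name, the parameters (`n_phosphenes`, `fov`), the positional jitter, and the unit-brightness rendering are all simplifying assumptions made for illustration.

```python
import numpy as np

def simulate_phosphenes(seg_mask, gaze_xy, n_phosphenes=400, fov=96, seed=0):
    """Render a simplified phosphene view of a semantic segmentation mask.

    Illustrative sketch only; real simulations model electrode-to-field
    mapping, phosphene size, brightness, and temporal dynamics.

    seg_mask : 2-D binary array (1 = object of interest, 0 = background),
               e.g. the output of any off-the-shelf segmentation model.
    gaze_xy  : (row, col) gaze position; the phosphene grid samples the
               region around the current gaze, approximating
               gaze-contingent stimulation. Assumes seg_mask is at
               least fov x fov pixels.
    """
    rng = np.random.default_rng(seed)
    h, w = seg_mask.shape
    # Regular grid of phosphene centres with slight positional jitter,
    # a crude stand-in for the cortical electrode-to-visual-field map.
    side = int(np.sqrt(n_phosphenes))
    ys, xs = np.meshgrid(np.linspace(0, fov - 1, side),
                         np.linspace(0, fov - 1, side))
    ys = ys + rng.normal(0, 1.0, ys.shape)
    xs = xs + rng.normal(0, 1.0, xs.shape)
    # Crop a gaze-centred window from the full segmentation mask.
    top = int(np.clip(gaze_xy[0] - fov // 2, 0, h - fov))
    left = int(np.clip(gaze_xy[1] - fov // 2, 0, w - fov))
    window = seg_mask[top:top + fov, left:left + fov]
    # A phosphene is "on" when its centre falls on the segmented object.
    rows = np.clip(np.round(ys).astype(int), 0, fov - 1)
    cols = np.clip(np.round(xs).astype(int), 0, fov - 1)
    rendered = np.zeros((fov, fov))
    on = window[rows, cols] > 0
    rendered[rows[on], cols[on]] = 1.0  # unit-brightness dots
    return rendered

# Toy usage: a square "object" in a 256x256 scene, gaze on its centre.
scene = np.zeros((256, 256))
scene[100:160, 100:160] = 1
view = simulate_phosphenes(scene, gaze_xy=(130, 130))
print(int(view.sum()), "phosphenes active")
```

Because the window is re-cropped at every gaze position, moving the gaze changes which part of the segmented scene is conveyed through the limited phosphene budget, which is the core idea behind gaze-controlled scene representation.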