Abstract
Medical image perception research is clearly important, but it is difficult for researchers to use authentic medical images as stimuli in a controlled manner. On the one hand, public medical image datasets are relatively uncommon and often incomplete, and the data processing and labeling required for real images can be prohibitively time-consuming. On the other hand, it is hard to find medical images that have the desired experimental attributes (e.g., lesion type or location). As a result, the stimuli used for medical perception experiments are often highly artificial. While such stimuli are easily generated and manipulated, they are routinely critiqued for being obviously unrealistic. Generating authentic-looking (i.e., metameric) medical stimuli is therefore important for medical image perception research. Here, we used Generative Adversarial Networks (GANs) to create perceptually authentic medical images. For each image modality (e.g., MRI or CT), the generator of the GAN was trained on modality-specific data to approximate the manifold of realistic images. We trained on a variety of publicly available medical image datasets, including DDSM, DeepLesion, and fastMRI. Novel (fake) radiographs were then synthesized by sampling from the learned image manifold. The method can also manipulate stimuli to match desired experimental attributes, such as texture and shape. We generated radiographs of the torso, limbs, and chest. Untrained observers and expert radiologists then completed a psychophysical experiment in which they attempted to distinguish real from generated (fake) radiographs. The resulting ROC analysis revealed consistent but near-chance performance, indicating that observers attended to the task but could not reliably distinguish the real radiographs from the generated ones. The method therefore provides a means of creating realistic stimuli for medical image perception experiments.
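For readers unfamiliar with the sampling step described above, the following is a minimal sketch (not the authors' implementation) of how novel images can be drawn from a trained GAN generator in PyTorch. The DCGAN-style Generator architecture, latent dimension, output resolution, and the checkpoint name generator_mri.pt are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: sampling synthetic radiograph-like images from a trained
# GAN generator. Architecture and checkpoint name are hypothetical.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy DCGAN-style generator mapping a latent vector to a 64x64 grayscale image."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),  # output intensities in [-1, 1]
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Reshape latent codes to (batch, latent_dim, 1, 1) and upsample to an image.
        return self.net(z.view(z.size(0), -1, 1, 1))

if __name__ == "__main__":
    gen = Generator()
    # In practice, modality-specific weights would be loaded here, e.g.:
    # gen.load_state_dict(torch.load("generator_mri.pt"))  # hypothetical checkpoint
    gen.eval()
    with torch.no_grad():
        z = torch.randn(16, 128)   # sample latent codes from the prior
        fakes = gen(z)             # synthesize 16 novel (fake) images
    print(fakes.shape)             # torch.Size([16, 1, 64, 64])
```

In this sketch, each latent vector z corresponds to one point on the learned image manifold; steering z (e.g., by interpolation) is one common way such generators are used to manipulate attributes like texture and shape.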