Journal of Vision, September 2021, Volume 21, Issue 9 (Open Access)
Vision Sciences Society Annual Meeting Abstract
A General Model for Medical Stimuli Synthesis
Author Affiliations
  • Zhihang Ren
    University of California, Berkeley
  • Min Zhou
    The First People's Hospital of Shuangliu District, Chengdu
  • Stella X. Yu
    University of California, Berkeley
  • David Whitney
    University of California, Berkeley
Journal of Vision September 2021, Vol.21, 2050. doi:https://doi.org/10.1167/jov.21.9.2050
      © ARVO (1962-2015); The Authors (2016-present)

      ×
  • Supplements
Abstract

Medical image perception research is clearly important, but it is difficult for researchers to use authentic medical images as stimuli in a controlled manner. On the one hand, public medical image datasets are relatively uncommon and often incomplete, and the data processing and labeling required for real images can be prohibitively time-consuming. On the other hand, it is hard to find medical images with the desired experimental attributes (e.g., lesion type or location). As a result, the stimuli used in medical perception experiments are often highly artificial; while such stimuli are easy to generate and manipulate, they are routinely critiqued for being obviously unrealistic. Generating authentic-looking (i.e., metameric) medical stimuli is therefore important for medical image perception research. Here, we used a Generative Adversarial Network (GAN) to create perceptually authentic medical images. For each image modality (e.g., MRI, CT), the generator of the GAN was trained to approximate the manifold of realistic images, given modality-specific training data. We used a variety of publicly available medical image datasets for training, including DDSM, DeepLesion, and fastMRI. Novel (fake) radiographs were synthesized by sampling from the learned image manifold, and the method was capable of manipulating the stimuli to match desired experimental attributes, such as texture and shape. We generated radiographs of the torso, limbs, and chest. Untrained observers and expert radiologists then completed a psychophysical experiment that required them to distinguish real from fake (generated) radiographs. The resulting ROC analysis revealed consistent but near-chance performance, indicating that observers attended to the task but could not reliably distinguish the real radiographs from our generated ones. The method therefore provides a means of creating realistic stimuli for medical image perception experiments.
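The real-versus-fake discrimination task described above can be scored with a standard ROC analysis: the area under the ROC curve (AUC) equals the probability that a randomly chosen real radiograph receives a higher "realness" rating than a randomly chosen generated one, so an AUC near 0.5 corresponds to chance discrimination. The following is a minimal NumPy sketch of that computation; the observer ratings here are simulated with a deliberately tiny effect size to mimic near-chance performance, and are not the study's actual data:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive (real) item gets a higher score than a
    randomly chosen negative (generated) item, counting ties as half."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos = scores[labels]      # ratings given to real radiographs
    neg = scores[~labels]     # ratings given to generated radiographs
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(0)
n_trials = 500
labels = rng.integers(0, 2, n_trials).astype(bool)   # True = real image
# Simulated ratings that are barely informative about the true label,
# mimicking observers who cannot reliably tell real from generated.
scores = rng.normal(loc=0.05 * labels, scale=1.0)

auc = roc_auc(labels, scores)
print(f"AUC = {auc:.3f}")  # near 0.5, i.e., near-chance discrimination
```

With a perfectly discriminating observer (all real items rated above all generated ones), the same function returns 1.0; the near-chance AUC reported in the abstract is what indicates the generated radiographs were perceptually indistinguishable from real ones.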
