October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract  |   October 2020
A Generative Model for Tumor Stimuli Synthesis
Author Affiliations
  • Zhihang Ren
    University of California, Berkeley
  • Tsung-Wei Ke
    University of California, Berkeley
  • Stella X. Yu
    University of California, Berkeley
  • David Whitney
    University of California, Berkeley
Journal of Vision October 2020, Vol.20, 1712. doi:https://doi.org/10.1167/jov.20.11.1712
      Zhihang Ren, Tsung-Wei Ke, Stella X. Yu, David Whitney; A Generative Model for Tumor Stimuli Synthesis. Journal of Vision 2020;20(11):1712. doi: https://doi.org/10.1167/jov.20.11.1712.



      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Recent studies have shown that previous visual stimuli can affect current visual perception. It is believed that such serial dependence helps increase perceptual stability, since our visual world tends to be stable over space and time. However, when radiologists review mammograms in a sequence, their visual world does not necessarily have the assumed stability, due to variations in patients, scanners, and tumor types. Serial dependence may thus strongly influence radiologists' decisions and diagnoses. Understanding the mechanism could potentially lead to new strategies that prevent radiologists from making biased decisions. To study the role of serial dependence in radiograph interpretation, we need to be able to generate visually related stimuli in a sequence. Synthetic tumor stimuli are typically generated by applying simple spatial deformations and intensity-filtering operations, such as blurring within masked areas. However, synthetic scans produced by such image manipulations often appear implausible to a radiologist: they are not metamers for real tumors, and they are often anatomically inconsistent with the surrounding tissue. Our goal is to synthesize realistic new tumor images from a small set of real scans. We leverage recent advances in deep learning to generate synthetic mammograms that conform to the statistical pattern distributions exhibited in the real scans. We build such a generative model on the Digital Database for Screening Mammography (DDSM) dataset, which contains 2,620 cases of normal and tumor scans. Our model can synthesize new scans with tumors similar to those in the source images, seamlessly embedded into the target background image. We are exploring additional Generative Adversarial Network (GAN) models that produce high-resolution synthetic scans with realistic variations in both the foreground tumor regions and the surrounding tissue.
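For illustration, the conventional mask-based manipulation that the abstract contrasts against (and that the authors' GAN-based model is meant to improve on) could be sketched roughly as follows. This is not the authors' method; the function name, the feathering scheme, and all parameters are assumptions chosen for the sketch:

```python
import numpy as np

def embed_tumor(source, target, mask, feather=5):
    """Naively composite a masked tumor region from `source` into `target`.

    source, target : 2-D grayscale images of the same shape, float in [0, 1].
    mask           : binary array marking the tumor region in `source`.
    feather        : half-width (pixels) of a box blur applied to the mask
                     edge so the pasted region fades into the background.
    """
    alpha = mask.astype(float)
    kernel = np.ones(2 * feather + 1) / (2 * feather + 1)
    # Cheap separable box blur to feather the mask boundary.
    for axis in (0, 1):
        alpha = np.apply_along_axis(
            lambda row: np.convolve(row, kernel, mode="same"), axis, alpha)
    alpha = np.clip(alpha, 0.0, 1.0)
    # Alpha-composite: tumor pixels where alpha is 1, background where 0.
    return alpha * source + (1.0 - alpha) * target
```

As the abstract notes, such compositing tends to look implausible to radiologists, because the blended edge is not anatomically consistent with the surrounding tissue; this is the motivation for learning the pattern statistics with a generative model instead.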
