Abstract
Recent studies have shown that previous visual stimuli can affect current visual perception. Such serial dependence is thought to increase perceptual stability, since our visual world tends to be stable over space and time. When radiologists review mammograms in sequence, however, their visual input lacks this assumed stability because of variation across patients, scanners, and tumor types. Serial dependence may therefore strongly influence radiologists' decisions and diagnoses, and understanding its mechanism could lead to new strategies that prevent radiologists from making biased decisions.
To study the role of serial dependence in radiograph interpretation, we must be able to generate visually related stimuli in a sequence. Synthetic tumor stimuli are typically produced by simple spatial deformations and intensity-filtering operations, such as blurring within masked regions. Scans produced by such image manipulations, however, often appear implausible to a radiologist: they are not metamers of real tumors, and they are frequently anatomically inconsistent with the surrounding tissue.
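The naive baseline described above can be sketched in a few lines. This is an illustrative toy, not the pipeline from any particular study: a random array stands in for a mammogram patch, the "deformation" is a crude integer translation, and the intensity filtering is a box blur applied only inside a hypothetical tumor mask.

```python
import numpy as np

def masked_blur(scan, mask, k=5):
    """Box-blur `scan` only where `mask` is True (naive intensity filtering)."""
    pad = k // 2
    padded = np.pad(scan.astype(float), pad, mode="edge")
    # Average the k*k shifted copies of the padded image (a simple box filter).
    blurred = np.zeros_like(scan, dtype=float)
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            blurred += padded[pad + dy : pad + dy + scan.shape[0],
                              pad + dx : pad + dx + scan.shape[1]]
    blurred /= k * k
    out = scan.astype(float).copy()
    out[mask] = blurred[mask]   # filter intensities only in the masked region
    return out

def deform(scan, shift=(2, 3)):
    """Crude spatial deformation: integer translation of the whole patch."""
    return np.roll(scan, shift, axis=(0, 1))

rng = np.random.default_rng(0)
scan = rng.random((64, 64))            # stand-in for a mammogram patch
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True              # hypothetical tumor region
stimulus = masked_blur(deform(scan), mask)
```

Because each new stimulus is a direct pixel-level edit of one source image, the result tends to look implausible: the edited region no longer matches the texture statistics of the surrounding tissue, which is exactly the shortcoming motivating a learned generative model.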
Our goal is to synthesize realistic new tumor images from a small set of real scans. We leverage recent advances in deep learning to generate synthetic mammograms whose statistical pattern distributions match those of the real scans.
We build such a generative model on the Digital Database for Screening Mammography (DDSM), which contains 2,620 cases of normal and tumor scans. Our model can synthesize new scans in which tumors similar to those in the source images are seamlessly embedded into a target background. We are also exploring Generative Adversarial Network (GAN) models that produce high-resolution synthetic scans with realistic variation in both the foreground tumor regions and the surrounding tissue.
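The adversarial objective underlying such GAN models can be illustrated with a deliberately tiny example. This sketch is not the image model described above: the generator and discriminator are one-parameter-pair affine/logistic functions on 1-D data, and samples from N(4, 1) stand in for "real scans". It shows only the alternating updates of the two networks under the standard non-saturating GAN objective.

```python
import numpy as np

# Toy 1-D GAN: generator G(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c).
# "Real" data are draws from N(4, 1); the generator learns to match them.
rng = np.random.default_rng(42)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 0.1, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.03, 64

for step in range(5000):
    z = rng.standard_normal(batch)
    fake = a * z + b
    real = 4.0 + rng.standard_normal(batch)

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    s_r, s_f = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - s_r) * real - s_f * fake)
    c += lr * np.mean((1 - s_r) - s_f)

    # Generator: gradient ascent on the non-saturating objective log D(fake).
    s_f = sigmoid(w * fake + c)
    g = (1 - s_f) * w                  # d log D(fake) / d fake
    a += lr * np.mean(g * z)
    b += lr * np.mean(g)

samples = a * rng.standard_normal(1000) + b   # draws from the trained generator
```

The image-scale models discussed above replace these scalar functions with convolutional networks, but the training loop has the same structure: the discriminator is pushed to separate real scans from synthesized ones, and the generator is pushed to make that separation impossible.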