Vision Sciences Society Annual Meeting Abstract  |   September 2018
Training a Convolutional Neural Network to Detect the Gist of Breast Cancer
Author Affiliations
  • Gaeun Kim
    Stanford University Online High School
  • Arkadiusz Sitek
    Philips Research
  • Jian Chen
    Department of Computer Science and Engineering, The Ohio State University
  • Karla Evans
    Department of Psychology, University of York
  • Jeremy Wolfe
    Harvard University and Brigham & Women's Hospital
Journal of Vision, September 2018, Vol. 18(10), 518. https://doi.org/10.1167/18.10.518
Abstract

Previous studies show that radiologists can discriminate normal from abnormal mammograms after just 250-2000 ms of observation. Interestingly, radiologists can still discriminate normal from abnormal even when the abnormal image shows the breast contralateral to the lesion. It is not clear which features/patterns of the images are responsible for successful extraction of this "gist" impression. In an effort to better understand this signal of abnormality, we have developed a convolutional neural network (CNN) model to perform the same task. The model is constructed in three steps. First, VGG-19, an established CNN, is pre-trained on non-medical images from the ImageNet database. This training makes the CNN analogous to a naïve observer, able to categorize objects but uninformed about mammography. Next, we feed full-field mammograms through the network to obtain 4096-dimensional feature vectors, which are abstract representations of the original mammograms. Finally, we perform normal/abnormal classification on the mammograms using the features obtained in the previous step and a supervised machine-learning algorithm, fine-tuning its parameters with an exhaustive grid search over kernel types and cost values. In a way, this last step is similar to sending radiologists to medical school to teach them how to interpret the visual information represented by the abstract features. The CNN produced an AUC of 0.74, comparable to our human observers. Human and computational assessments of gist are correlated (r=0.65). However, since they are not perfectly correlated, it is possible to combine human and CNN assessments of abnormality into a joint assessment that is better than either humans or the CNN alone. The signal is not well correlated with breast density. These results show that there are global signals of abnormality that can be detected by an appropriately trained CNN. It is possible that such signals could serve as "imaging risk factors" in breast cancer screening.
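
The abstract does not name the software used; as one possible illustration, the sketch below implements the three-step pipeline with PyTorch/torchvision for the VGG-19 feature extraction and scikit-learn for the grid-searched classifier. The use of an SVM as the supervised learner is an assumption (kernel types and cost values suggest one, but the abstract does not say), and `mammogram_paths` and `labels` are hypothetical inputs.

```python
# Hedged sketch of the three-step pipeline described in the abstract.
# Library choices, the SVM classifier, and the input variables are assumptions.
import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Step 1: VGG-19 pre-trained on ImageNet -- the "naive observer".
vgg = models.vgg19(weights="IMAGENET1K_V1").eval()
# Keep the fully connected layers up to the penultimate linear layer,
# whose output is the 4096-dimensional feature vector.
feature_head = torch.nn.Sequential(*list(vgg.classifier.children())[:-3])

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_features(image_path: str) -> np.ndarray:
    """Step 2: map a full-field mammogram to a 4096-d feature vector."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)
    with torch.no_grad():
        conv = vgg.features(x)
        pooled = vgg.avgpool(conv).flatten(1)
        feats = feature_head(pooled)
    return feats.squeeze(0).numpy()

# Step 3: supervised normal/abnormal classification with an exhaustive grid
# search over kernel types and cost (C) values.
# `mammogram_paths` and `labels` (0 = normal, 1 = abnormal) are placeholders.
X = np.stack([extract_features(p) for p in mammogram_paths])
grid = GridSearchCV(
    SVC(),
    param_grid={"kernel": ["linear", "rbf"], "C": [0.1, 1, 10, 100]},
    scoring="roc_auc", cv=5,
)
grid.fit(X, labels)
print("Cross-validated AUC:", grid.best_score_)
```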

Meeting abstract presented at VSS 2018
