September 2021
Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract | September 2021
Learning From Paintings Improves Representations for Fabric Recognition
Author Affiliations & Notes
  • Hubert Lin
    Cornell University
  • Mitchell Van Zuijlen
    Delft University of Technology
  • Maarten W.A. Wijntjes
    Delft University of Technology
  • Sylvia C. Pont
    Delft University of Technology
  • Kavita Bala
    Cornell University
  • Footnotes
    Acknowledgements  This work was funded in part by NSF (CHS-1617861 and CHS-1513967), NSERC (PGS-D 516803 2018), and the Netherlands Organization for Scientific Research (NWO) project 276-54-001.
Journal of Vision September 2021, Vol. 21, 2185. https://doi.org/10.1167/jov.21.9.2185
Citation

      Hubert Lin, Mitchell Van Zuijlen, Maarten W.A. Wijntjes, Sylvia C. Pont, Kavita Bala; Learning From Paintings Improves Representations for Fabric Recognition. Journal of Vision 2021;21(9):2185. https://doi.org/10.1167/jov.21.9.2185.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Material classification is the task of distinguishing materials like fabric, wood, ceramic, and so forth. Fine-grained classification aims to distinguish subcategories of a material (e.g., satin fabric versus wool fabric). Fine-grained recognition relies on identifying specific visual attributes (e.g., satin is glossy while wool is not) rather than contextual cues (e.g., both satin and wool are used as clothing). In paintings, artists carefully place visual cues such as highlights on fabrics like silk or satin; as such, we hypothesize that learning a visual recognition model from paintings can be beneficial. In this study, we explored the representations learned by neural networks for the task of distinguishing silk/satin from cotton/wool. We trained separate models on paintings from the Materials In Paintings (MIP) dataset and on photos from Flickr, and extracted evidence heatmaps that indicate which cues each model uses in unseen test images. We conducted a study with 57 quality-controlled participants on MTurk to analyze which model uses cues that are preferred by humans. Overall, we found that the model trained on paintings uses cues that humans prefer more strongly. Furthermore, we found that both models use cues that are equally preferred when tested on photos of silk/satin. This is interesting because the model trained on paintings has never been explicitly trained to classify photos of silk/satin, and conventional wisdom suggests it should fail to extract cues comparable to those used by the photo model in this setting. Moreover, the painting model generalizes better across domains with respect to prediction accuracy. While this study is limited to two fine-grained classes of fabric, our results show that perceptual depictions in paintings can be useful for guiding deep neural networks.
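The abstract does not specify how the evidence heatmaps were computed. As a minimal illustrative sketch, the snippet below shows one widely used evidence-heatmap technique, Grad-CAM (Selvaraju et al., 2017), applied to a hypothetical binary silk/satin-versus-cotton/wool classifier. The ResNet-18 backbone, the label order, and the function names are assumptions for illustration, not details taken from the study.

```python
# Hypothetical sketch: Grad-CAM evidence heatmaps for a binary fabric
# classifier. Backbone, labels, and names are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

CLASSES = ["cotton_wool", "silk_satin"]  # assumed label order

model = models.resnet18(num_classes=2)   # fine-tuned weights would be loaded here
model.eval()

# Capture activations and gradients at the last convolutional block.
acts, grads = {}, {}
layer = model.layer4
layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def evidence_heatmap(image: Image.Image, target_class: int) -> torch.Tensor:
    """Return an [H, W] map in [0, 1] of evidence for `target_class`."""
    x = preprocess(image).unsqueeze(0)
    logits = model(x)
    model.zero_grad()
    logits[0, target_class].backward()
    # Grad-CAM: weight each channel by its mean gradient, sum, then ReLU.
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                        align_corners=False).squeeze()
    return cam / (cam.max() + 1e-8)
```

Under this setup, one heatmap would be extracted per test image from each model (painting-trained and photo-trained), and the two maps shown side by side for human preference judgments.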
