Journal of Vision, August 2023, Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Plenoptic: A platform for synthesizing model-optimized visual stimuli
Author Affiliations
  • Lyndon Duong
    New York University
  • Kathryn Bonnen
    Indiana University
  • William Broderick
    Flatiron Institute
  • Pierre-Étienne Fiquet
    New York University
  • Nikhil Parthasarathy
    New York University
  • Thomas Yerxa
    New York University
  • Xinyuan Zhao
    New York University
  • Eero Simoncelli
    New York University
    Flatiron Institute
Journal of Vision August 2023, Vol. 23, 5822. doi: https://doi.org/10.1167/jov.23.9.5822
      Lyndon Duong, Kathryn Bonnen, William Broderick, Pierre-Étienne Fiquet, Nikhil Parthasarathy, Thomas Yerxa, Xinyuan Zhao, Eero Simoncelli; Plenoptic: A platform for synthesizing model-optimized visual stimuli. Journal of Vision 2023;23(9):5822. https://doi.org/10.1167/jov.23.9.5822.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

In sensory perception and neuroscience, new computational models are most often tested and compared in terms of their ability to fit existing data sets. However, experimental data are inherently limited in size, quality, and type, and complex models often saturate their explainable variance. Moreover, it is often difficult to use models to guide the development of future experiments. Here, building on ideas for optimal experimental stimulus selection (e.g., QUEST, Watson and Pelli, 1983), we present "Plenoptic", a Python software library for generating visual stimuli optimized for testing or comparing models. Plenoptic provides a unified framework containing four previously published synthesis methods -- model metamers (Freeman and Simoncelli, 2011), Maximum Differentiation (MAD) competition (Wang and Simoncelli, 2008), eigen-distortions (Berardino et al., 2017), and representational geodesics (Hénaff and Simoncelli, 2015) -- each of which offers visualization of model representations, and generation of images that can be used to experimentally test alignment with the human visual system. Plenoptic leverages modern machine-learning methods to enable application of these synthesis methods to any computational model that satisfies a small set of common requirements: chiefly, the model must be image-computable, implemented in PyTorch, and end-to-end differentiable. The package includes examples of several low- and mid-level visual models, as well as a set of perceptual quality metrics. Plenoptic is open source, tested, documented, and extensible, allowing the broader research community to contribute new examples and methods. In summary, Plenoptic leverages machine learning tools to tighten the scientific hypothesis-testing loop, facilitating investigation of human visual representations.
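To make the model-metamer idea concrete, here is a minimal, self-contained sketch (not plenoptic's actual API) of the underlying optimization: given an "image-computable" model, gradient descent adjusts a random seed image until its model representation matches that of a target image, yielding two images the model cannot distinguish. The toy pooling model and all function names below are illustrative assumptions; plenoptic itself operates on arbitrary differentiable PyTorch models via autodiff, whereas this linear model admits an analytic gradient.

```python
import numpy as np

def model(img, block=4):
    """Toy 'image-computable' model: mean pooling over non-overlapping
    blocks (a stand-in for a real differentiable visual model)."""
    h, w = img.shape
    return img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def synthesize_metamer(target, steps=2000, lr=1.0, block=4, seed=0):
    """Minimize ||model(x) - model(target)||^2 by gradient descent,
    starting from random noise. For this linear pooling model the
    gradient is analytic: the representation error, upsampled to pixel
    resolution and scaled by d(mean)/d(pixel) = 1/block^2."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, target.shape)
    target_rep = model(target, block)
    for _ in range(steps):
        diff = model(x, block) - target_rep            # representation error
        grad = np.kron(diff, np.ones((block, block))) * (2 / block**2)
        x -= lr * grad                                  # gradient step
    return x

rng = np.random.default_rng(1)
target = rng.uniform(0, 1, (16, 16))
metamer = synthesize_metamer(target)
rep_err = np.abs(model(metamer) - model(target)).max()  # ~0: representations match
pix_err = np.abs(metamer - target).max()                # large: pixels differ
```

Because the representations match while the pixels do not, the two images are metamers *for this model*; showing such pairs to human observers tests whether the model's representation discards the same information the visual system does.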
