September 2024
Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Self-supervised models of human texture-discrimination
Author Affiliations
  • Abhranil Das
    University of Texas at Austin
  • Wilson Geisler
    University of Texas at Austin
Journal of Vision September 2024, Vol. 24, 901. doi: https://doi.org/10.1167/jov.24.10.901

Abhranil Das, Wilson Geisler; Self-supervised models of human texture-discrimination. Journal of Vision 2024;24(10):901. https://doi.org/10.1167/jov.24.10.901.

Abstract

Our ability to interpret scenes builds on segmenting images into same-texture regions, which usually correspond to the same physical ‘stuff’. How do we do this so quickly on arbitrary new images with novel textures, under varying lighting and geometry, with a visual system whose resolution degrades rapidly away from fixation? We conduct the first direct measurement of the human ability to judge whether two 1°×1° grayscale texture patches (sampled widely to prevent learning of specific textures) are the same ‘stuff’, presented at the same location or at two different locations spanning the fovea and three eccentricities. We also develop the first models of such general discrimination, using two image-computable approaches that can incorporate biological properties such as eye optics and the ganglion-cell sampling resolution at the stimulus location. In the first model, we define important texture features such as the luminance histogram, power spectrum, and edge properties. We then measure and model the statistical distribution of these features across textures, and from these distributions build a Bayesian ideal observer for same-different discrimination. With a single fixed decision boundary on only two features, this observer discriminates arbitrary texture patches with 93% accuracy and aligns qualitatively with human performance. The second class of models comprises convolutional neural networks that can discover new features; these also achieve good discrimination. Biological vision does not need numerous labelled samples to learn texture discrimination. Instead, it is likely self-supervised: primitive vision plausibly used coarse features to segment images, and evolution then bootstrapped these labels to learn texture discrimination. We implemented this by training the Bayesian texture discriminator on natural image patches segmented by proximity and color, and the same decision boundary emerged as with explicit texture labels.
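
A minimal sketch of the first modeling approach is given below. The choice of the two features (mean patch luminance and the slope of the radially averaged power spectrum) and the zero-mean Gaussian model of feature differences are illustrative assumptions; the abstract does not specify which two features carry the fixed decision boundary.

```python
import numpy as np

def features(patch):
    """Two assumed summary features of a grayscale patch: mean luminance
    and the slope of its radially averaged power spectrum."""
    h, w = patch.shape
    spec = np.abs(np.fft.fftshift(np.fft.fft2(patch - patch.mean()))) ** 2
    fy, fx = np.indices((h, w))
    r = np.hypot(fy - h // 2, fx - w // 2)        # radial frequency
    mask = (r > 0) & (r < min(h, w) // 2)
    # Slope of log power vs. log radial frequency (natural images ~ 1/f).
    slope = np.polyfit(np.log(r[mask]), np.log(spec[mask] + 1e-12), 1)[0]
    return np.array([patch.mean(), slope])

def fit_observer(same_pairs, diff_pairs):
    """Fit zero-mean Gaussian models of the feature differences under the
    'same' and 'different' hypotheses (an assumed parametric form)."""
    d_same = np.array([features(a) - features(b) for a, b in same_pairs])
    d_diff = np.array([features(a) - features(b) for a, b in diff_pairs])
    return np.cov(d_same, rowvar=False), np.cov(d_diff, rowvar=False)

def is_same_texture(patch_a, patch_b, cov_same, cov_diff):
    """Bayesian likelihood-ratio rule with equal priors: one fixed
    quadratic boundary in the 2-D feature-difference space."""
    d = features(patch_a) - features(patch_b)
    def log_gauss(cov):
        return -0.5 * (d @ np.linalg.solve(cov, d)
                       + np.log(np.linalg.det(2 * np.pi * cov)))
    return log_gauss(cov_same) > log_gauss(cov_diff)
```

With the two covariances estimated from example pairs, the likelihood-ratio rule reduces to a single fixed boundary in the two-dimensional feature-difference space, matching the single-boundary description in the abstract.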

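The self-supervised step can be sketched in the same spirit: pseudo-labels come from a coarse grouping of natural-image patches by proximity and color, not from texture labels. The k-means grouping on (color, position), the patch tiling, and the segment count below are hypothetical stand-ins, since the abstract does not describe its segmentation procedure.

```python
def coarse_segments(rgb, n_seg=4, iters=10, seed=0):
    """Toy k-means on (color, position) as a stand-in for the
    proximity-and-color grouping described in the abstract."""
    h, w, _ = rgb.shape
    ys, xs = np.indices((h, w))
    feats = np.column_stack([rgb.reshape(-1, 3),
                             ys.ravel() / h, xs.ravel() / w])
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), n_seg, replace=False)]
    for _ in range(iters):
        labels = ((feats[:, None] - centers) ** 2).sum(-1).argmin(1)
        for k in range(n_seg):               # update non-empty clusters
            if np.any(labels == k):
                centers[k] = feats[labels == k].mean(0)
    return labels.reshape(h, w)

def pseudo_labeled_pairs(rgb, gray, patch=32, n_seg=4):
    """Tile the image into patches; pairs within one coarse segment are
    pseudo-'same', pairs across segments pseudo-'different'."""
    seg = coarse_segments(rgb, n_seg)
    patches, seg_ids = [], []
    for y in range(0, gray.shape[0] - patch + 1, patch):
        for x in range(0, gray.shape[1] - patch + 1, patch):
            patches.append(gray[y:y + patch, x:x + patch])
            block = seg[y:y + patch, x:x + patch].ravel()
            seg_ids.append(np.bincount(block).argmax())
    same, diff = [], []
    for i in range(len(patches)):
        for j in range(i + 1, len(patches)):
            pair = (patches[i], patches[j])
            (same if seg_ids[i] == seg_ids[j] else diff).append(pair)
    return same, diff   # feed to fit_observer() from the sketch above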