Vision Sciences Society Annual Meeting Abstract  |   September 2021
Using machine learning to understand human sensitivity to noise in naturalistic images
Author Affiliations
  • Bhavin Sheth
    University of Houston
  • Maxwell Johnson
    University of Houston
  • Markus Potthast
    University of Houston
Journal of Vision September 2021, Vol.21, 2459. doi:https://doi.org/10.1167/jov.21.9.2459
Citation: Bhavin Sheth, Maxwell Johnson, Markus Potthast; Using machine learning to understand human sensitivity to noise in naturalistic images. Journal of Vision 2021;21(9):2459. https://doi.org/10.1167/jov.21.9.2459.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Understanding our sensitivity to noise in naturalistic conditions will provide key clues as to the visual system's adaptations to the real world. Determining which characteristics of image noise (kind, amount, or distribution) appear particularly noisy to humans is a key first step. To examine this, we presented pairs of images (natural landscapes: m=102 shot at ground level; m=289 overhead), each pair consisting of two noise-added versions of the same noise-free image, and had observers (n=108 total) judge, in a 2AFC paradigm, which image appeared clearer (less noisy). By varying the level and kind of noise, we obtained data from 2,346 image pairs. Classifiers were trained to reproduce human choices from forty-four pre-determined features extracted from each image. Features were either local (computed over a small neighborhood around each pixel) or global (ensemble values computed over the entire image). Results shown here are the averages of four algorithms (decision tree, random forest, logistic regression, SVM), evaluated with ten-fold cross-validation. Custom code for feature extraction was written in MATLAB; supervised machine learning was implemented in Python (PyCharm). Our approach was to use machine learning to mimic human selections, then probe the classifiers to identify which image features might underlie human sensitivity to noise. While the all-features classifier matched human choice data well (95.3±1.2%), its performance was nearly equaled by a classifier (93.0±1.3%) based on a single local feature, LocalContrastNearestNeighbor-Red, the local (red-channel) contrast between a pixel and its nearest neighbors. The second-best (92.4±1.4%) single-feature classifier was also local. In contrast, the best single global-feature classifiers, namely NumberOfOutlines (88.0±1.6%), HistogramMean-Red (83.3±1.9%), and HistogramSkewness-Red (82.8±1.9%), did not perform as well. Generally speaking, images judged clearer by human observers had lower local contrast and more positive skewness than their noisier counterparts. Additional signatures of human-like performance are being explored.
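For readers who want a concrete picture of the pipeline, the sketch below (Python with NumPy and scikit-learn, not the authors' code) illustrates one way to compute a nearest-neighbor red-channel contrast feature and to average 10-fold cross-validated accuracy across the four algorithms named above. The exact feature definition and the pair-difference encoding of the 2AFC data are illustrative assumptions, since the abstract does not specify them.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier


    def local_contrast_nearest_neighbor_red(image):
        """One plausible reading of LocalContrastNearestNeighbor-Red:
        mean absolute difference between each interior pixel's red value
        and the mean of its four nearest neighbors (assumed definition;
        image is an H x W x 3 RGB array)."""
        red = image[..., 0].astype(float)
        up, down = red[:-2, 1:-1], red[2:, 1:-1]
        left, right = red[1:-1, :-2], red[1:-1, 2:]
        neighbor_mean = (up + down + left + right) / 4.0
        return float(np.mean(np.abs(red[1:-1, 1:-1] - neighbor_mean)))


    def mean_2afc_accuracy(X, y):
        """Average 10-fold cross-validated accuracy over the four
        algorithms listed in the abstract. X holds one feature vector per
        image pair (e.g., feature differences between the two images);
        y holds the human 2AFC choices (0 or 1)."""
        models = [
            DecisionTreeClassifier(),
            RandomForestClassifier(n_estimators=100),
            LogisticRegression(max_iter=1000),
            SVC(),
        ]
        scores = [cross_val_score(m, X, y, cv=10).mean() for m in models]
        return float(np.mean(scores))

In this sketch, restricting X to a single column (for example, the pairwise difference in local_contrast_nearest_neighbor_red) would correspond to the single-feature classifiers reported above.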
