Vision Sciences Society Annual Meeting Abstract  |   September 2016
The Maximum Differentiation competition depends on the Viewing Conditions
Author Affiliations
  • Jesús Malo
    Image Processing Lab, Universitat de València, Valencia, Spain
  • David Kane
    Dept. of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
  • Marcelo Bertalmío
    Dept. of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
Journal of Vision, September 2016, Vol. 16, 822. https://doi.org/10.1167/16.12.822
Abstract

The optimization or falsification of vision science models can require time-consuming experimentation. This is especially true for models of artifact detection, which require large databases of threshold judgments or subjective image quality scores. Wang & Simoncelli (JoV, 2005, 2008) proposed a novel psychophysical method to avoid such experimental burden: the MAximum Differentiation (MAD) competition. This technique computes a pair of maximally different images according to each vision model under investigation, and the subject then selects the pair of images they perceive as more different. This paradigm reduces the falsification of competing models to a single experiment. As a result, MAD has been used to simplify the optimization of divisive-normalization contrast perception models (Malo & Simoncelli, SPIE 2015). The MAD paradigm was proposed in a context-independent manner and has been used on complex, unconstrained datasets. However, as a proof of concept, we demonstrate that the MAD paradigm can produce contradictory results under different surround conditions: these computational examples (based on luminance adaptation and the associated crispening effect; see supplementary material) show that the decision between models cannot be reduced to a single image comparison. Instead, MAD must be extended, either by (1) performing several surround-dependent comparisons with the same images, which would reduce the conceptual advantage of MAD, or by (2) including the effects of the surround in the models entered into the MAD competition, which would yield surround-dependent image pairs.

REFERENCES

Wang, Z. & Simoncelli, E. P. (2005). MAD competition: Comparing quantitative models of perceptual discriminability. VSS Abstract. Journal of Vision, 5(8): 230.
Wang, Z. & Simoncelli, E. P. (2008). MAximum Differentiation competition: A methodology for comparing computational models of perceptual quantities. Journal of Vision, 8(12): 8, 1–13.
Malo, J. & Simoncelli, E. P. (2015). Geometrical and statistical properties of vision models obtained via MAximum Differentiation. Proc. SPIE, Human Vision and Electronic Imaging, Vol. 9394.
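To make the synthesis idea concrete, the following is a minimal, hypothetical Python sketch of one MAD-style step: it perturbs an image so as to increase the distance predicted by one model while approximately holding the other model's distance fixed, by projecting out the gradient of the constrained model. The two metrics (plain MSE and a luminance-weighted MSE standing in for a surround/adaptation-dependent model), the function names, and the step sizes are illustrative assumptions, not the models or code used by Wang & Simoncelli (2008) or in this abstract.

```python
import numpy as np

def mse(ref, img):
    """Model A (placeholder): mean squared error."""
    return np.mean((ref - img) ** 2)

def weighted_mse(ref, img, w):
    """Model B (placeholder): luminance-weighted squared error, a crude
    stand-in for a surround/adaptation-dependent perceptual metric."""
    return np.mean(w * (ref - img) ** 2)

def numerical_grad(d, ref, img, eps=1e-4):
    """Finite-difference gradient of d(ref, .) at img (slow; illustration only)."""
    g = np.zeros_like(img)
    it = np.nditer(img, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        up, dn = img.copy(), img.copy()
        up[idx] += eps
        dn[idx] -= eps
        g[idx] = (d(ref, up) - d(ref, dn)) / (2 * eps)
    return g

def mad_step(ref, img, d_fixed, d_varied, sign, step=1e-2):
    """One MAD-style iteration: move img to increase (sign=+1) or decrease
    (sign=-1) d_varied while staying approximately on the level set of
    d_fixed, by projecting out the component along d_fixed's gradient."""
    g_var = numerical_grad(d_varied, ref, img)
    g_fix = numerical_grad(d_fixed, ref, img)
    g_fix_unit = g_fix / (np.linalg.norm(g_fix) + 1e-12)
    g_tan = g_var - np.dot(g_var.ravel(), g_fix_unit.ravel()) * g_fix_unit
    return img + sign * step * g_tan

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.uniform(0.2, 0.8, size=(8, 8))                  # tiny toy "image"
    img = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0.0, 1.0)
    w = 1.0 / (0.1 + ref)                                     # toy surround-dependent weights
    d_b = lambda r, x: weighted_mse(r, x, w)
    best = img.copy()
    for _ in range(20):   # push model B up while holding MSE roughly fixed
        best = mad_step(ref, best, d_fixed=mse, d_varied=d_b, sign=+1)
    print("MSE     :", mse(ref, img), "->", mse(ref, best))
    print("Model B :", d_b(ref, img), "->", d_b(ref, best))
```

In the full procedure of Wang & Simoncelli (2008), this constrained ascent/descent is run in both directions and with the roles of the two models swapped, yielding extremal image pairs for each model. The point of the present abstract is that if the competing models contain no surround term, the synthesized pairs cannot depend on the surround, whereas the perceptual judgments do, which is why surround effects must enter either the comparisons or the models themselves.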

Meeting abstract presented at VSS 2016
