Abstract
The optimization or falsification of vision science models can require time-consuming experimentation. This is especially true for models of artifact detection, which require large databases of threshold judgements or subjective image quality scores. Wang & Simoncelli (JoV, 2005, 2008) proposed a novel psychophysical method to avoid this experimental burden: the MAximum Differentiation (MAD) competition. This technique computes a pair of maximally different images according to each vision model under investigation, and the subject then selects the pair of images that they perceive to have the greater difference. This paradigm reduces the falsification of competing models to a single experiment. As a result, MAD has been used to simplify the optimization of divisive-normalization contrast perception models (Malo & Simoncelli, SPIE 2015). The MAD paradigm was proposed in a context-independent manner and has been used on complex, unconstrained datasets. However, as a proof of concept, we demonstrate that the MAD paradigm can produce contradictory results under different surround conditions: these computational examples (based on luminance adaptation and the associated crispening effect; see supplementary material) show that the decision between models cannot be reduced to a single image comparison. On the contrary, MAD must be extended, either by (1) performing a number of surround-dependent comparisons with the same images, which would reduce the conceptual advantage of MAD, or by (2) including the effects of the surround in the models considered in the MAD competition, which would yield surround-dependent image pairs.

REFERENCES

Wang, Z. & Simoncelli, E. P. (2005). MAD competition: Comparing quantitative models of perceptual discriminability. VSS Abstract. Journal of Vision, 5(8), 230.

Wang, Z. & Simoncelli, E. P. (2008). MAximum Differentiation competition: A methodology for comparing computational models of perceptual quantities. Journal of Vision, 8(12):8, 1-13.

Malo, J. & Simoncelli, E. P. (2015). Geometrical and statistical properties of vision models obtained via MAximum Differentiation. Proceedings of SPIE, Human Vision and Electronic Imaging, Vol. 9394.
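As a concrete illustration of the synthesis procedure described above, the following is a minimal Python/NumPy sketch of one constrained-gradient step of the kind used for MAD image synthesis: the distance predicted by one model is pushed up or down while the other model's prediction is held approximately fixed by gradient projection. The gradient callables grad_a and grad_b are hypothetical placeholders for the two competing models; this is a sketch of the general idea under those assumptions, not the authors' published implementation.

    import numpy as np

    def mad_step(img, ref, grad_a, grad_b, step=0.01, sign=1.0):
        """One projected-gradient step of MAD synthesis (sketch).

        Moves `img` so that model B's predicted distance to `ref`
        increases (sign=+1) or decreases (sign=-1), while model A's
        predicted distance stays approximately constant.

        img, ref : 1-D float arrays (flattened images)
        grad_a   : callable giving the gradient of d_A(img, ref) w.r.t. img
        grad_b   : callable giving the gradient of d_B(img, ref) w.r.t. img
        """
        ga = grad_a(img, ref)
        gb = grad_b(img, ref)
        # Remove the component of model B's gradient that would change
        # model A's distance, i.e. project onto the level set of d_A.
        gb_proj = gb - (gb @ ga) / (ga @ ga + 1e-12) * ga
        return img + sign * step * gb_proj

    # Toy usage: take model A to be mean squared error, whose gradient is
    # available in closed form; grad_b would be the (analytic or autodiff)
    # gradient of the competing perceptual metric.
    grad_mse = lambda x, r: 2.0 * (x - r) / x.size

Iterating such steps from an initially distorted image, once with sign=+1 and once with sign=-1, gives the pair of maximally and minimally different images for model B at fixed model-A distance; swapping the roles of the two models gives the second pair.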
Meeting abstract presented at VSS 2016