Abstract
A key step in vision research is the comparison of experimental data to models intended to predict those data. Until recently, limited computer power and a lack of appropriate software meant that the researcher's tool kit was restricted to a few generic techniques, such as fitting individual psychometric functions. Such fits entail assumptions, such as the exact form of the psychometric function, that are rarely tested. Nor is it always obvious how to compare competing models, to show that one describes the data better than another, or to estimate how much of the ‘variability’ in observers' responses a model actually captures. Limitations on the models that researchers can fit translate into limitations on the questions they can ask and, ultimately, on the perceptual phenomena that can be understood. Because of recent advances in statistical algorithms and the increased computer power available to all researchers, it is now possible to make use of a wide range of computer-intensive parametric and nonparametric approaches based on modern statistical methods. These approaches allow the experimenter to make more efficient use of perceptual data, to fit a wider range of perceptual data, to avoid unwarranted assumptions, and potentially to consider more complex experimental designs with the assurance that the resulting data can be analyzed. Researchers are likely familiar with nonparametric resampling methods such as the bootstrap (Efron, 1979; Efron & Tibshirani, 1993). We review a broader range of developments in statistics over the past twenty years, including results from the machine-learning and model-selection literatures.

Knoblauch introduces the symposium and describes how a wide range of psychophysical procedures (including fitting psychometric functions, estimating classification images, and estimating the parameters of signal detection theory) share a common mathematical structure that can be readily addressed by modern statistical approaches. He also shows how to extend these methods to model more complex experimental designs and discusses modern approaches to smoothing data. Foster describes how to relax the typical assumptions made in fitting psychometric functions and instead let the data themselves guide the fit. Macke describes decision-images, a technique based on logistic regression for extracting critical stimulus features, and shows how the extracted features can be used to generate optimized stimuli for subsequent psychophysical experiments. Wichmann describes how to use “inverse” machine-learning techniques to model visual saliency from eye-movement data. Maloney discusses the measurement and modeling of suprathreshold differences to model appearance and gives several recent applications to the perception of surface material, surface lightness, and image quality.

The presentations outline how these approaches have been adapted to specific psychophysical tasks, including psychometric-function fitting, classification, visual saliency, difference scaling, and conjoint measurement. They show how these modern methods allow experimenters to make better use of their data and to gain deeper insight into the operation of the visual system than was hitherto possible.
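To make the resampling idea concrete, the sketch below illustrates the nonparametric bootstrap of Efron (1979) in Python with NumPy; the sample data, the choice of the median as the statistic, and the percentile confidence interval are illustrative assumptions made here, not part of the symposium material.

```python
# Minimal sketch of the nonparametric bootstrap (Efron, 1979):
# resample the data with replacement and recompute the statistic.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=0.5, size=100)  # illustrative sample

def statistic(x):
    return np.median(x)  # any statistic of interest, e.g., a threshold

boot = np.array([
    statistic(rng.choice(data, size=data.size, replace=True))
    for _ in range(2000)
])

# Percentile 95% confidence interval for the statistic.
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"median = {statistic(data):.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

The same scheme applies to statistics with no closed-form sampling distribution, such as thresholds or slopes derived from a fitted psychometric function, which is what makes it attractive for perceptual data.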
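The common mathematical structure Knoblauch describes is that of the generalized linear model (GLM). As a minimal sketch, assuming Python with statsmodels and invented response counts, a logistic psychometric function can be fit as a binomial GLM:

```python
# Minimal sketch: fitting a psychometric function as a GLM
# (binomial responses, logit link). Data are illustrative only.
import numpy as np
import statsmodels.api as sm

# Stimulus intensities and responses: n_correct out of n_trials at each level.
intensity = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
n_trials = np.full(5, 40)
n_correct = np.array([22, 25, 31, 36, 39])

# Design matrix: intercept plus log stimulus intensity.
X = sm.add_constant(np.log(intensity))
# Binomial GLM with the default logit link = a logistic psychometric function.
y = np.column_stack([n_correct, n_trials - n_correct])
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

print(fit.params)        # intercept and slope on the log-intensity axis
print(fit.predict(X))    # fitted proportion correct at each intensity
```

Because the GLM separates the link function from the linear predictor, the same machinery accommodates other links (e.g., probit) and richer design matrices, which is one way the framework extends to the more complex experimental designs mentioned above.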
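Macke's decision-image technique is based on logistic regression; the rough sketch below, with a simulated linear observer and scikit-learn (both assumptions introduced here for illustration, not the symposium's own implementation), shows how a decision image can be read off the fitted regression weights:

```python
# Rough sketch of a decision-image-style analysis: logistic regression of
# binary responses on stimulus pixels; the weight map approximates the
# linear feature driving the observer's decision. Simulated data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, side = 2000, 8                      # 8x8-pixel noise stimuli
stimuli = rng.normal(size=(n_trials, side * side))

# Hypothetical observer: decides via a fixed linear template plus noise.
template = np.zeros(side * side)
template[side * side // 2] = 1.0              # sensitive to one central pixel
responses = (stimuli @ template + rng.normal(size=n_trials) > 0).astype(int)

# L2-regularized logistic regression; coef_ is the estimated decision image.
model = LogisticRegression(C=1.0).fit(stimuli, responses)
decision_image = model.coef_.reshape(side, side)
print(decision_image.round(2))
```

In this illustrative setup the recovered weight map concentrates on the template pixel; in an experiment, the estimated decision image could then guide the construction of optimized stimuli for follow-up trials, as described above.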