Kenneth Knoblauch, Laurence Maloney; Classification images estimated by generalized additive models. Journal of Vision 2008;8(6):344. doi: https://doi.org/10.1167/8.6.344.
Purpose. Classification images are typically estimated by a weighted combination of the means of the noise profiles from the response/signal categories of a psychophysical experiment in which the signal is embedded in noise on a fraction of the trials. This method can be characterized as a linear model (LM). The result is often subsequently smoothed by some arbitrary amount to yield a cleaner image. We describe how to estimate classification images with alternative statistical methods that incorporate smoothing into the estimation process itself and that yield more accurate estimates described by fewer parameters.

Methods. The classification-image observer can be modeled directly, trial by trial, as a Generalized Linear Model (GLM). We describe how to extend the GLM by adding smooth basis terms to the model matrix, producing a Generalized Additive Model (GAM). The GAM prediction is a smoothed template, where the degree of smoothing is chosen to minimize the prediction error of the data. We compared the three methods on simulated data for experiments of 100 to 10000 trials and with a 2000-fold variation in the noise added to the template. We also compared the methods on published data (Thomas & Knoblauch, 2005) for detection of a Gabor temporal luminance modulation.

Results. For the simulated data, the GAM method yielded an estimate closer to the underlying template than the other two methods in the presence of substantial amounts of noise. Interestingly, for the real data as well, the GAM estimate produced an image closer to the ideal template than the other two. In both cases, the GAM approach required about 1/3 to 1/2 fewer parameters to describe the data.

Conclusion. A GAM approach to estimating classification images has the advantage of producing a more parsimonious estimate that is closer to the underlying template.
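The contrast between the classical LM estimate and a smoothed-basis fit of the kind a GAM embodies can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the 1-D Gabor template, the trial counts, the simulated noisy linear observer, and the fixed Gaussian-bump basis are all hypothetical choices, and a real GAM would select the amount of smoothing by penalized fitting against prediction error rather than fixing the basis in advance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a 1-D "image" of 64 samples with a Gabor-like template.
n_trials, n_pix = 5000, 64
x = np.linspace(-3, 3, n_pix)
template = np.exp(-x**2) * np.cos(4 * x)
template /= np.linalg.norm(template)

# Simulated yes/no detection in noise: signal present on half the trials,
# observer responds via a noisy linear (template-matching) decision rule.
noise = rng.normal(0.0, 1.0, (n_trials, n_pix))
signal = rng.integers(0, 2, n_trials)                      # 1 = signal present
stim = noise + 0.5 * signal[:, None] * template
resp = (stim @ template + rng.normal(0, 0.5, n_trials)) > 0.4

# Classical LM estimate: difference of noise-field means across response
# categories (the simplest weighted combination of category means).
lm_est = noise[resp].mean(axis=0) - noise[~resp].mean(axis=0)
lm_est /= np.linalg.norm(lm_est)

# Smoothed-basis estimate in the GAM spirit: express the template as a
# combination of smooth bumps and fit the coefficients by least squares,
# so smoothing is part of the estimation rather than a post-hoc blur.
n_basis = 16
centers = np.linspace(x.min(), x.max(), n_basis)
B = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / 0.3) ** 2)
coef, *_ = np.linalg.lstsq(noise @ B, resp.astype(float) - resp.mean(),
                           rcond=None)
gam_est = B @ coef
gam_est /= np.linalg.norm(gam_est)

# Correlation of each (unit-norm) estimate with the generating template.
print(float(lm_est @ template), float(gam_est @ template))
```

The smooth estimate is described by 16 basis coefficients rather than 64 pixel values, which is the parsimony the abstract refers to; a full GAM would replace the fixed least-squares fit with a binomial-family fit and a smoothness penalty chosen from the data.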