Abstract
The classification image technique is a method for estimating an observer's internal template in a detection or discrimination task. Originally used in the context of Vernier acuity (Ahumada 1996), the approach has recently been adapted to more complex tasks, including disparity processing (Neri et al. 1999), illusory contour completion (Gold et al. 2000), and face recognition (Sekuler et al. 2004). The noisy, trial-intensive nature of the procedure limits both the number of stimulus dimensions that can be probed and the resolution at which they can be probed. We therefore sought an analysis procedure that would maximize the efficiency of the classification image technique. A widely used approach to statistical testing of classification images is to apply a global threshold, with a Bonferroni correction, to individual image components; this method ignores correlations between adjacent image components. More efficient methods are available. For instance, hard thresholding of image components in overcomplete tight frames yields efficient image denoising (Yu et al. 1996). False discovery rate (FDR) testing has been shown to be as conservative as the Bonferroni correction in terms of global type I error, yet less prone to type II errors (Benjamini & Hochberg 1995). We adapted these two methods to the statistical testing of classification images. The hybrid FDR/tight-frame method was applied to classification images from a simulated linear amplifier model (LAM) observer, using a variety of idealized observer templates from previously published classification image experiments. The number of trials required to reach a criterion Pearson correlation (r = 0.5) between the estimated and true templates was typically an order of magnitude lower with the hybrid technique than with Bonferroni thresholding. Improvements were greatest for templates with complex, oriented features, such as faces. These results suggest that the hybrid method improves the efficiency of classification image measurements, particularly in experiments with high-resolution or high-dimensional stimuli.
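To make the two thresholding rules contrasted above concrete, the following is a minimal Python/NumPy sketch (not the authors' code) of Bonferroni and Benjamini-Hochberg FDR thresholding applied to a map of per-pixel z-scores from a classification image; it assumes the noise variance is known, so two-tailed p-values follow from the standard normal distribution.

```python
import numpy as np
from scipy.stats import norm

def bonferroni_threshold(zmap, alpha=0.05):
    """Zero out pixels whose two-tailed p-value fails the Bonferroni-corrected test."""
    pmap = 2.0 * norm.sf(np.abs(zmap))           # two-tailed p-values
    return np.where(pmap < alpha / zmap.size, zmap, 0.0)

def fdr_threshold(zmap, q=0.05):
    """Benjamini & Hochberg (1995) step-up procedure on per-pixel p-values."""
    pmap = 2.0 * norm.sf(np.abs(zmap))
    ranked = np.sort(pmap.ravel())
    n = ranked.size
    passing = ranked <= np.arange(1, n + 1) / n * q
    if not passing.any():
        return np.zeros_like(zmap)               # nothing survives
    p_crit = ranked[passing.nonzero()[0].max()]  # largest p_(k) with p_(k) <= k*q/n
    return np.where(pmap <= p_crit, zmap, 0.0)
```

At the same nominal error level, the FDR rule retains more weak but genuine template pixels than the Bonferroni rule, which is the type II error advantage cited from Benjamini & Hochberg (1995).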
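The abstract does not name the overcomplete tight frame used. As a stand-in, the sketch below hard-thresholds coefficients of a one-level undecimated Haar transform, which under the scaling used here is a 2x-overcomplete Parseval frame, so reconstruction by the adjoint is exact; a 2D classification image would be handled by applying the transform separably along each axis.

```python
import numpy as np

def haar_frame_analysis(x):
    """One-level undecimated Haar transform with circular boundary.
    Coefficients are scaled so the frame is Parseval (energy-preserving)."""
    shifted = np.roll(x, -1)
    low = (x + shifted) / 2.0                    # local averages
    high = (x - shifted) / 2.0                   # local differences
    return low, high

def haar_frame_synthesis(low, high):
    """Adjoint of the analysis operator; an exact inverse for a Parseval frame."""
    return (low + high + np.roll(low, 1) - np.roll(high, 1)) / 2.0

def frame_denoise(x, threshold):
    """Hard-threshold the detail coefficients, then reconstruct."""
    low, high = haar_frame_analysis(x)
    high = np.where(np.abs(high) > threshold, high, 0.0)  # kill small detail coeffs
    return haar_frame_synthesis(low, high)

# Demo with illustrative parameters: denoise a noisy step edge.
x = np.repeat([0.0, 1.0], 32) + np.random.default_rng(1).normal(0, 0.2, 64)
clean = frame_denoise(x, threshold=0.3)
```

Because the frame is overcomplete, each image feature is represented by several overlapping coefficients, so thresholding suppresses noise without the blocking artifacts of thresholding in a single orthonormal basis.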
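The simulation loop described above can be sketched as follows, with a linear amplifier model (LAM) observer performing a yes/no detection task; the template, noise levels, trial count, and the simplified pooling of signal-present and signal-absent trials are illustrative assumptions, not details given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_lam(template, n_trials, ext_sd=1.0, int_sd=1.0, criterion=0.0):
    """LAM observer: cross-correlate template with the noisy stimulus, add
    internal noise, respond 'yes' above criterion. Returns a raw classification
    image (mean noise given 'yes' minus mean noise given 'no', pooled over
    signal-present and signal-absent trials -- a simplified pooling)."""
    t = template.ravel()
    noise_by_resp = {True: [], False: []}
    for _ in range(n_trials):
        present = rng.random() < 0.5
        noise = rng.normal(0.0, ext_sd, t.size)
        stim = noise + (t if present else 0.0)
        resp = bool(stim @ t + rng.normal(0.0, int_sd) > criterion)
        noise_by_resp[resp].append(noise)
    ci = np.mean(noise_by_resp[True], axis=0) - np.mean(noise_by_resp[False], axis=0)
    return ci.reshape(template.shape)

# Efficiency criterion from the abstract: Pearson correlation of r = 0.5
# between the estimated and true templates. The bar template is hypothetical.
template = np.zeros((16, 16))
template[8, 4:12] = 1.0
ci = simulate_lam(template, n_trials=20000)
r = np.corrcoef(ci.ravel(), template.ravel())[0, 1]
print(f"Pearson r between estimate and template: {r:.2f}")
```

In the study itself, the raw classification image would be thresholded (Bonferroni, or the hybrid FDR/tight-frame method) before computing the correlation, and the number of trials needed to reach r = 0.5 compared across methods.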
This research was supported by CIHR.