Abstract
Classification images and bubbles images are psychophysical tools that use stimulus noise to investigate what features people use to make perceptual decisions. Previous work has shown that classification images can be estimated using the generalized linear model (GLM), and here I show that this is true for bubbles images as well. Expressing the two approaches in terms of a single statistical model clarifies their relationship to one another, makes it possible to measure classification images and bubbles images simultaneously, and allows improvements developed for one method to be used with the other.

Introduction

Classification images and bubbles images are psychophysical tools that recover some of the features observers use to identify images (Ahumada, 1996, 2002; Gosselin & Schyns, 2001; Murray, 2011). These two methods were invented independently, and despite their similarities, they have developed along independent paths. Previous work has shown that classification images can be calculated using the generalized linear model (GLM), and here I show that this is true for bubbles images as well (McCullagh & Nelder, 1989; Abbey & Eckstein, 2001; Knoblauch & Maloney, 2008; Dobson & Barnett, 2008). Expressing the two approaches in terms of a single statistical model clarifies their relationship to one another, makes it possible to measure classification images and bubbles images simultaneously, and allows improvements developed for one method to also be used with the other.

In broad terms, the classification image and bubbles methods are similar: they both introduce random fluctuations into stimuli and measure the influence of these fluctuations on observers' responses. One important difference is that the most common versions of these methods require different kinds of noise. The theory of classification images has mostly been developed assuming additive Gaussian noise (Ahumada, 2002; Murray, Bennett, & Sekuler, 2002), whereas current methods for bubbles images require sparse multiplicative noise (Gosselin & Schyns, 2001).

Classification images are usually calculated using weighted sums of noise fields (Ahumada, 1996, 2002), but they can also be estimated using the GLM (Knoblauch & Maloney, 2008). The GLM approach has the advantage that it does not make strong assumptions about the stimulus noise. Consider an experiment where the stimulus **g** is a signal, **s**_{1} or **s**_{2}, shown in an additive noise field **n**, and the observer makes an identification response *r* = 1 or *r* = 2. Using the GLM we can model the influence of the noise fields on the observer's responses as follows:

$ P(r = 2 \mid \mathbf{s}_{1} \text{ shown}) = \Phi(\mathbf{n}^{T}\boldsymbol{\beta}_{1} + \beta_{1,0}) $ (1)

$ P(r = 2 \mid \mathbf{s}_{2} \text{ shown}) = \Phi(\mathbf{n}^{T}\boldsymbol{\beta}_{2} + \beta_{2,0}) $ (2)

Here *Φ* is the standard normal cumulative distribution function, **β**_{1} is a vector of regression coefficients that encodes the classification image for **s**_{1} trials, and *β*_{1,0} is a constant that encodes the response bias on **s**_{1} trials. **β**_{2} and *β*_{2,0} encode the corresponding quantities for **s**_{2} trials. We represent **g**, **s**_{i}, **n**, and **β**_{i} as column vectors, and we use ^{T} to denote matrix transposition, so **x**^{T}**y** is the dot product of **x** and **y**. Equations (1) and (2) are GLM models of the observer's responses, and many statistical software packages have routines for making maximum-likelihood estimates of the regression coefficients **β**_{i}. The GLM does not make strong assumptions about the statistical distribution of the stimulus noise, so it can be used to measure classification images using noise fields from non-Gaussian distributions (Abbey & Eckstein, 2001) and even from the natural variations in some classes of stimuli, such as faces (Macke & Wichmann, 2010).

The template matcher, a longstanding model of visual decision making (Peterson, Birdsall, & Fox, 1954; Green & Swets, 1974), is an instance of the GLM. A template matcher performs a two-alternative identification task by taking the dot product of the stimulus **g** with a template **w**, adding an internal noise *z* that has standard deviation *σ*_{z}, and making a response according to whether the resulting decision variable exceeds a criterion *a*:

$ r = 2 \text{ if } \mathbf{g}^{T}\mathbf{w} + z > a, \quad r = 1 \text{ otherwise} $ (3)

In a classification image experiment with noisy stimuli **g** = **s**_{i} + **n**, the template matcher's response probabilities are:

$ P(r = 2 \mid \mathbf{s}_{1} \text{ shown}) = \Phi((\mathbf{s}_{1}^{T}\mathbf{w} + \mathbf{n}^{T}\mathbf{w} - a)/\sigma_{z}) $ (4)

$ = \Phi(\mathbf{n}^{T}(\mathbf{w}/\sigma_{z}) + (\mathbf{s}_{1}^{T}\mathbf{w} - a)/\sigma_{z}) $ (5)

$ P(r = 2 \mid \mathbf{s}_{2} \text{ shown}) = \Phi((\mathbf{s}_{2}^{T}\mathbf{w} + \mathbf{n}^{T}\mathbf{w} - a)/\sigma_{z}) $ (6)

$ = \Phi(\mathbf{n}^{T}(\mathbf{w}/\sigma_{z}) + (\mathbf{s}_{2}^{T}\mathbf{w} - a)/\sigma_{z}) $ (7)

Comparing Equations (1) and (5) shows that for a template matcher, the GLM's regression weights are **β**_{1} = **w**/*σ*_{z}, and Equations (2) and (7) show that **β**_{2} = **w**/*σ*_{z} as well. Thus the GLM recovers the template **w** up to a scale factor.

We can also calculate bubbles images using the GLM. In a bubbles experiment the signal **s**_{1} or **s**_{2} is modulated by a multiplicative noise field **n**, so the stimulus is **s**_{1} ∘ **n** or **s**_{2} ∘ **n**, where ∘ is the componentwise (Hadamard) product, (**x** ∘ **y**)_{i} = *x*_{i}*y*_{i}. The bubbles method assumes that each stimulus region contributes to some degree to an observer's correct responses. Each signal has an associated map of *potent information* (Gosselin & Schyns, 2002). The potent information map (i.e., the bubbles image) answers the following question: If we modulate the signal intensity at some location, how does this affect the probability of a correct response? This suggests that to calculate a bubbles image using the GLM, we should regress the correctness of the observer's responses against the multiplicative noise that modulates the stimulus:

$ P(r \text{ correct} \mid \mathbf{s}_{1} \text{ shown}) = \Phi(\mathbf{n}^{T}\boldsymbol{\beta}_{1} + \beta_{1,0}) $ (8)

$ P(r \text{ correct} \mid \mathbf{s}_{2} \text{ shown}) = \Phi(\mathbf{n}^{T}\boldsymbol{\beta}_{2} + \beta_{2,0}) $ (9)

The GLM bubbles images are then the estimated regression coefficients **β**_{1} and **β**_{2}. (Note that Equations (5) and (7) give the probabilities of response *r* = 2, whereas Equations (8) and (9) give the probabilities of *correct* responses, *r* = 1 or *r* = 2, depending on the signal.)

Is this a plausible reformulation of the original bubbles method? For the template matcher in Equation (3), in a bubbles experiment the probability of a correct response is:

$ P(r \text{ correct} \mid \mathbf{s}_{1} \text{ shown}) = P(r = 1 \mid \mathbf{s}_{1} \text{ shown}) $ (10)

$ = 1 - \Phi(((\mathbf{s}_{1} \circ \mathbf{n})^{T}\mathbf{w} - a)/\sigma_{z}) $ (11)

$ = \Phi(\mathbf{n}^{T}(-\mathbf{s}_{1} \circ \mathbf{w}/\sigma_{z}) + a/\sigma_{z}) $ (12)

$ P(r \text{ correct} \mid \mathbf{s}_{2} \text{ shown}) = \Phi(((\mathbf{s}_{2} \circ \mathbf{n})^{T}\mathbf{w} - a)/\sigma_{z}) $ (13)

$ = \Phi(\mathbf{n}^{T}(\mathbf{s}_{2} \circ \mathbf{w}/\sigma_{z}) - a/\sigma_{z}) $ (14)

Comparing Equations (8) and (12) shows that for a template matcher, the GLM's regression weights are **β**_{1} = −**s**_{1} ∘ **w**/*σ*_{z}, and Equations (9) and (14) show that **β**_{2} = **s**_{2} ∘ **w**/*σ*_{z}. For a template matcher, the conventional bubbles method produces a blurred, affinely transformed estimate of −**s**_{1} ∘ **w** on trials where signal **s**_{1} is shown, and of **s**_{2} ∘ **w** on trials where **s**_{2} is shown (Murray & Gold, 2004). Of course, we can also blur and affinely transform the estimates from the GLM if we wish to. Thus for a template matcher, the case where the bubbles method is understood best and the only case where the expected value of a bubbles image is known, the suggested GLM approach agrees with the original bubbles method.

Simultaneous classification images and bubbles images

The GLM methods outlined above require additive noise for classification images and multiplicative noise for bubbles images. This is less of a constraint than it may seem, because stimulus noise can be described as either additive or multiplicative if we choose its distribution accordingly. A signal times multiplicative noise, **s**_{1} ∘ **n**_{m}, can also be described as a signal plus additive noise, **s**_{1} + **n**_{a}, if we set **n**_{a} = **s**_{1} ∘ (**n**_{m} − 1). Similarly, a signal plus additive noise **s**_{1} + **n**_{a} can be described as a signal times multiplicative noise, **s**_{1} ∘ **n**_{m}, if we set **n**_{m} = (**s**_{1} + **n**_{a}) / **s**_{1}, where / is componentwise division. (This requires **s**_{1} ≠ 0, a detail we return to later.) Of course, these transformations change the noise distribution, e.g., homogeneous multiplicative noise is transformed into nonhomogeneous additive noise, and zero-mean additive noise is transformed into multiplicative noise with a mean of 1. However, the GLM does not require homogeneous noise, or noise with a particular mean value, so we can simply describe stimulus noise as additive to calculate classification images and as multiplicative to calculate bubbles images. This approach amounts to measuring the influence of a transformation of the stimulus pixels on the observer's responses, as has sometimes been done in classification image studies (e.g., Abbey & Eckstein, 2006), rather than the stimulus pixels themselves.

To illustrate these methods we ran two simulations, one using white Gaussian noise and one using bubbles noise, and in both simulations we used the GLM to calculate classification images and bubbles images.
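The redescription between the two noise formats is a simple pointwise identity, which the following sketch checks numerically. (Python is used here for illustration; the signal and noise values are arbitrary stand-ins, not the article's stimuli.)

```python
# A signal s (nonzero everywhere, as the redescription requires) and a
# multiplicative noise field n_m, e.g. values from a bubbles mask.
s   = [0.15, -0.15, 0.15, -0.15]        # illustrative contrast values
n_m = [1.3, 0.0, 0.7, 1.0]

# Multiplicative -> additive:  n_a = s o (n_m - 1)
n_a = [si * (nmi - 1.0) for si, nmi in zip(s, n_m)]

# Both descriptions produce the same stimulus:  s o n_m == s + n_a
stim_mult = [si * nmi for si, nmi in zip(s, n_m)]
stim_add  = [si + nai for si, nai in zip(s, n_a)]
assert all(abs(a - b) < 1e-12 for a, b in zip(stim_mult, stim_add))

# Additive -> multiplicative:  n_m = (s + n_a) / s  (requires s != 0)
n_m_back = [(si + nai) / si for si, nai in zip(s, n_a)]
assert all(abs(a - b) < 1e-9 for a, b in zip(n_m, n_m_back))
```

Wherever the signal is nonzero, the two descriptions are interchangeable, which is what lets a single data set yield both a classification image and a bubbles image.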

Methods

The signals were 20 × 20 pixel vernier patterns (Figure 1), with a lower line at the same location in both signals and an upper line shifted one pixel to the left or right. In the Gaussian noise simulation, the stimulus was a randomly chosen signal shown at 15% Weber contrast in additive white Gaussian noise with zero mean and a pixelwise standard deviation of 25% contrast. In the bubbles noise simulation, the stimulus was a randomly chosen signal shown at 15% contrast, multiplied pointwise by ten randomly placed Gaussian bubbles with standard deviations of one pixel. In both simulations, the observer made unbiased responses by taking the dot product of a suboptimal template (Figure 1) with the stimulus, adding univariate internal Gaussian noise, and responding "right" if the resulting decision variable was greater than zero and "left" otherwise. This template resembles a typical classification image for human observers in a vernier discrimination task (Beard & Ahumada, 1998). We scaled the template so that its pixelwise sum of squares equaled 1.0. The standard deviation of the internal noise was chosen so that the observer gave 72% correct responses; this required a standard deviation of 25% in the Gaussian noise simulation and 3% in the bubbles noise simulation. In both cases this produced an internal-to-external noise ratio of approximately 1 (Burgess & Colborne, 1988). Each simulation ran for 5,000 trials. GLM fits were made using the *glmfit* function in the MATLAB Statistics Toolbox (The MathWorks, Natick, MA), with a probit (i.e., inverse cumulative normal) link function and a binomial response distribution. The MATLAB code for these simulations is available online as supporting information.

Figure 1

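The article's GLM fits used MATLAB's *glmfit*. As a rough cross-check of the same logic, the miniature Python sketch below (a hypothetical 4-pixel stimulus, illustrative template and noise values, and a hand-rolled gradient-ascent probit fit standing in for *glmfit*) simulates a template-matching observer in additive Gaussian noise and recovers its template, up to a scale factor, as the regression weights of Equation (1).

```python
import math
import random

def norm_cdf(x):
    """Standard normal CDF, Phi(x)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

random.seed(1)

# Illustrative 4-pixel stand-ins for the article's 20 x 20 stimuli:
# signal s1, template w, internal noise sd sigma_z, unbiased criterion 0.
s1 = [0.15, 0.15, 0.15, 0.15]
w = [1.0, -1.0, 0.5, -0.5]
sigma_z = 1.0
n_trials = 2000

noise, resp = [], []
for _ in range(n_trials):
    n = [random.gauss(0.0, 1.0) for _ in w]          # additive noise field
    g = [si + ni for si, ni in zip(s1, n)]           # stimulus g = s1 + n
    dv = sum(gi * wi for gi, wi in zip(g, w)) + random.gauss(0.0, sigma_z)
    noise.append(n)
    resp.append(1 if dv > 0.0 else 0)                # response r = 2 coded as 1

# Maximum-likelihood probit fit of Equation (1) by gradient ascent,
# standing in for glmfit's probit/binomial fit.
beta = [0.0] * len(w)
beta0 = 0.0
for _ in range(250):
    grad = [0.0] * len(w)
    grad0 = 0.0
    for n, y in zip(noise, resp):
        eta = sum(ni * bi for ni, bi in zip(n, beta)) + beta0
        p = min(max(norm_cdf(eta), 1e-9), 1.0 - 1e-9)
        s = (y - p) * norm_pdf(eta) / (p * (1.0 - p))  # probit score
        grad0 += s
        for j, nj in enumerate(n):
            grad[j] += s * nj
    beta0 += 0.5 * grad0 / n_trials
    beta = [bj + 0.5 * gj / n_trials for bj, gj in zip(beta, grad)]

print([round(b, 2) for b in beta])
```

With these illustrative values the fitted coefficients approach **w**/*σ*_{z}; any standard GLM routine with a probit link and binomial response distribution does the same job more efficiently.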

Results

Figure 2 shows classification images and bubbles images calculated from Gaussian noise. The first row shows classification images for left-signal and right-signal trials, calculated using the conventional weighted sum method (Ahumada, 1996, 2002). In all classification images shown in this article, white pixels indicate regions where positive-contrast noise makes the observer more likely to respond "right" (i.e., signal 2). The second row shows classification images calculated using the GLM by regressing the observer's responses, coded as left = 0 and right = 1, against the Gaussian noise. As expected, both these approaches recover the observer's template (Figure 1). In a bubbles experiment where the noise bubbles have standard deviation *σ*_{b}, the bubbles image is effectively blurred by a Gaussian kernel of standard deviation $ \sqrt{2}\,\sigma_{b} $ (Murray & Gold, 2004). We convolved the classification images in this simulation with a Gaussian kernel of standard deviation $ \sqrt{2} $ pixels in order to make them comparable to the results of the bubbles simulation reported below, where the bubbles had a standard deviation of 1 pixel.

Figure 2


The third row of Figure 2 shows bubbles images calculated using the GLM by regressing the correctness of the observer's responses, coded as incorrect = 0 and correct = 1, against the equivalent multiplicative noise. That is, we took the Gaussian noise fields **n**_{a}, redescribed them as multiplicative noise fields **n**_{m} = (**s**_{i} + **n**_{a}) / **s**_{i} (using **s**_{i} = **s**_{1} or **s**_{i} = **s**_{2} as appropriate on individual trials), and regressed the correctness of the observer's responses against these multiplicative noise fields. We ignored stimulus locations where there was no signal (**s**_{i} = 0), since at these locations additive noise cannot be redescribed as multiplicative noise; this is not problematic, because bubbles images are empty at no-signal locations anyway, precisely because there is no signal for the noise to modulate. (We return to this point in the Discussion section.) We convolved these GLM bubbles images with a Gaussian kernel of standard deviation $ \sqrt{2} $ pixels to make them comparable to the results of the bubbles simulation reported below. In all bubbles images shown in this article, white pixels indicate regions where increasing the signal's contrast magnitude makes the observer more likely to give a correct response. The main result of the Gaussian noise simulation is that the GLM bubbles images in the third row of Figure 2 are similar to the conventional bubbles images in the bubbles simulation described next. That is, using Gaussian noise we recovered the bubbles images that the conventional method recovers using sparse multiplicative noise.

Figure 3 shows classification images and bubbles images calculated from bubbles noise. The first row shows bubbles images calculated using the conventional bubbles method (Gosselin & Schyns, 2001). The second row shows bubbles images calculated using the GLM by regressing the correctness of the observer's responses against the bubbles noise. We convolved the GLM bubbles images with a Gaussian kernel of standard deviation $ \sqrt{2} $ pixels, in order to make them comparable to the conventional bubbles images, and because from the low-frequency noise in this simulation we can only expect to recover low-frequency components of the bubbles image. The GLM bubbles images are similar to the conventional bubbles images, and both types are similar to the bubbles images recovered from Gaussian noise in the previous simulation.
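The redescription step just described can be sketched end to end. The code below (Python, with a hypothetical 3-pixel signal and a gradient-ascent probit fit in place of *glmfit*; all parameter values are illustrative) simulates a template matcher viewing signal **s**_{2} in additive Gaussian noise, redescribes the noise as multiplicative, and regresses the correctness of the responses against it; the expected regression weights are **s**_{2} ∘ **w**/*σ*_{z}, as in Equation (14).

```python
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

random.seed(5)

# Illustrative 3-pixel signal (nonzero everywhere), template, and noise.
s2 = [0.15, 0.15, 0.15]
w = [1.0, -1.0, 0.5]
sigma_z = 0.2
n_trials = 2000

X, y = [], []
for _ in range(n_trials):
    n_a = [random.gauss(0.0, 0.25) for _ in s2]      # additive noise
    stim = [si + ni for si, ni in zip(s2, n_a)]
    dv = sum(gi * wi for gi, wi in zip(stim, w)) + random.gauss(0.0, sigma_z)
    correct = 1 if dv > 0.0 else 0                   # r = 2 is correct on s2 trials
    n_m = [(si + ni) / si for si, ni in zip(s2, n_a)]  # redescribed noise
    X.append(n_m)
    y.append(correct)

# Probit fit of correctness on the multiplicative noise (Equation (9)).
beta = [0.0] * len(s2)
beta0 = 0.0
for _ in range(500):
    grad = [0.0] * len(s2)
    grad0 = 0.0
    for xi, yi in zip(X, y):
        eta = sum(a * b for a, b in zip(xi, beta)) + beta0
        p = min(max(norm_cdf(eta), 1e-9), 1.0 - 1e-9)
        s = (yi - p) * norm_pdf(eta) / (p * (1.0 - p))
        grad0 += s
        for j, xj in enumerate(xi):
            grad[j] += s * xj
    beta0 += 0.25 * grad0 / n_trials
    beta = [bj + 0.25 * gj / n_trials for bj, gj in zip(beta, grad)]

print([round(b, 2) for b in beta])
```

The fitted weights follow the sign pattern of **s**_{2} ∘ **w**, even though the generating noise was additive and Gaussian, mirroring the simulation result above.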

Figure 3


To make a more exact comparison, we plotted each bubbles image, pixelwise, against the theoretically expected bubbles image, **b** ∗ **b** ∗ (**s**_{i} ∘ **w**), where **b** is the bubble used to create the bubbles noise, and ∗ is two-dimensional convolution (Murray & Gold, 2004; Gosselin & Schyns, 2004). We first subtracted a scalar from each bubbles image so that its mean background value, far from the signal, was zero. We fitted symmetric power functions *y* = *k* · sign(*x*)|*x*|^{n} to the scatterplots, where sign(*x*) equals +1 when *x* ≥ 0 and −1 otherwise. Figure 4 shows that all three methods matched the theoretical bubbles image reasonably well. The conventional bubbles image differed from the theoretical image by a compressive nonlinearity, whereas the two GLM bubbles images were closer to being directly proportional.

Figure 4

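The symmetric power-function fit can be reproduced with a simple grid search. In the sketch below (Python; the data are synthetic, generated from known *k* and *n*, rather than the article's bubbles images), *y* = *k* · sign(*x*)|*x*|^{n} is fitted by minimizing squared error over a coarse (*k*, *n*) grid.

```python
# Fit y = k * sign(x) * |x|**n by grid search over (k, n).
def powfun(x, k, n):
    s = 1.0 if x >= 0.0 else -1.0
    return k * s * abs(x) ** n

xs = [i / 10.0 for i in range(-20, 21)]
ys = [powfun(x, 2.0, 0.5) for x in xs]     # known k = 2, n = 0.5 (compressive)

best = None
for ki in range(1, 41):                    # k in 0.1 .. 4.0
    for ni in range(1, 21):                # n in 0.1 .. 2.0
        k, n = ki / 10.0, ni / 10.0
        err = sum((y - powfun(x, k, n)) ** 2 for x, y in zip(xs, ys))
        if best is None or err < best[0]:
            best = (err, k, n)

err, k_hat, n_hat = best                   # recovers the generating values
```

An exponent *n* < 1 corresponds to the compressive nonlinearity seen in the conventional bubbles image, and *n* ≈ 1 to the nearly proportional GLM images; in practice a continuous optimizer would replace the grid.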

Finally, the third row of Figure 3 shows classification images calculated using the GLM by regressing the observer's responses against the equivalent additive noise. That is, we took the bubbles noise fields **n**_{m}, redescribed them as additive noise fields **n**_{a} = **s**_{i} ∘ (**n**_{m} − 1), and regressed the observer's responses against these additive noise fields. We convolved the classification images with a Gaussian kernel of standard deviation $ \sqrt{2} $ pixels. The GLM classification images recovered from bubbles noise are similar to the classification images recovered from Gaussian noise, within the constraint that in this simulation there was no noise at locations where there was no signal, and so naturally we did not recover the observer's template at those locations. Thus the classification image from left-signal trials does not show the upper-right lobe of the observer's template, the image from right-signal trials does not show the upper-left lobe, and neither image completely recovers the bottom two lobes.

Discussion

The GLM allows us to describe classification images and bubbles images in a common framework, and illustrates the strong similarities between them. A classification image is found by regressing the observer's *responses* against stimulus noise, expressed as *additive* contrast fluctuations. A bubbles image is found by regressing the *correctness* of the observer's responses against stimulus noise, expressed as *multiplicative* contrast fluctuations. Over the subset of trials where a single signal was shown, the difference between *the observer's responses* and *the correctness of the observer's responses* is trivial: on trials where signal 1 was shown, a "signal 1" response is correct and a "signal 2" response is incorrect. Thus the more substantial difference is that a classification image measures the influence of stimulus fluctuations expressed as additive noise, and a bubbles image measures the influence of stimulus fluctuations expressed as multiplicative noise. For this reason, it seems unlikely that the two methods will reveal fundamentally different information about observers' strategies. Viewed as regression methods, they simply measure the covariates differently. This similarity is not a byproduct of formulating the methods in terms of the GLM; in the Appendix we show that it can be seen in the original methods as well.

A more important difference between classification image and bubbles methods, as they have been practiced, is that they have used different kinds of noise. The GLM gives us more freedom in this respect; it allows us to partly disentangle our analysis from the kind of noise we use. As we have shown, we can measure both classification images and bubbles images using Gaussian white noise. There are limits to this freedom, of course; for instance, in the bubbles simulation we were unable to recover classification images at stimulus locations where there was no noise. Also, the GLM approach we have suggested for measuring bubbles images, where we ignore noise at nonsignal locations, does not work well with noise that is correlated across signal and nonsignal locations. Suppose we use low-pass Gaussian noise that is correlated at two nearby locations, *x*_{1} and *x*_{2}, and there is a signal at *x*_{1} but not at *x*_{2}. If we ignore the noise at *x*_{2}, then the GLM will attribute any effect that it has on the observer's responses to the correlated noise at *x*_{1}, introducing an artifact. Within such limits, different types of noise may have various advantages, e.g., we may use bubbles noise in experiments where we wish to show only realistic stimulus fragments (Felsen & Dan, 2005; Rust & Movshon, 2005), or we may use Gaussian white noise to include a wider range of stimulus fluctuations.
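The artifact just described is easy to reproduce. In the Python sketch below (illustrative parameters; a linear regression slope serves as a stand-in for the GLM weight), the simulated observer is influenced only by the ignored location *x*_{2}, yet regressing responses on *x*_{1} alone yields a clearly nonzero weight at *x*_{1}.

```python
import math
import random

random.seed(3)
rho = 0.8            # correlation between noise at x1 and x2
n_trials = 4000

x1s, resp = [], []
for _ in range(n_trials):
    x1 = random.gauss(0.0, 1.0)
    x2 = rho * x1 + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
    dv = x2 + random.gauss(0.0, 1.0)      # observer uses ONLY x2
    x1s.append(x1)
    resp.append(1 if dv > 0.0 else 0)

# Slope of responses on x1 alone (proportional to the weight a GLM
# would assign to x1 when the correlated noise at x2 is ignored).
mx = sum(x1s) / n_trials
my = sum(resp) / n_trials
cov = sum((x - mx) * (y - my) for x, y in zip(x1s, resp)) / n_trials
var = sum((x - mx) ** 2 for x in x1s) / n_trials
slope = cov / var
```

Although the true influence of *x*_{1} on the observer is zero, the estimated slope is substantially positive, which is the artifact: the effect of the ignored, correlated location is misattributed to the signal location.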

The GLM has made several advances possible in classification image methods, including examining strategies in 1-of-*n* identification tasks, incorporating priors, using statistical tests associated with the GLM, allowing new kinds of noise, and opening the way to related frameworks such as the generalized additive model (Abbey & Eckstein, 2001; Knoblauch & Maloney, 2008; Macke & Wichmann, 2010; Mineault, Barthelmé, & Pack, 2009; Murray, 2011). With the GLM, these advances can be used for calculating bubbles images as well.

Acknowledgments

This work was funded by the Natural Sciences and Engineering Research Council of Canada and the Canada Foundation for Innovation. I thank Kenneth Knoblauch, Christopher Taylor, and an anonymous reviewer for helpful comments.

Commercial relationships: none.

Corresponding author: Richard F. Murray.

Email: rfm@yorku.ca.

Address: Centre for Vision Research, York University, Toronto, ON, Canada.

References

Abbey, C. K., & Eckstein, M. P. (2001). Maximum-likelihood and maximum-a-posteriori estimates of human observer templates. *Proceedings of SPIE*, 4324, 114–122.

Abbey, C. K., & Eckstein, M. P. (2006). Classification images for detection, contrast discrimination, and identification tasks with a common ideal observer. *Journal of Vision*, 6(4):4, 335–355, http://www.journalofvision.org/content/6/4/4, doi:10.1167/6.4.4.

Ahumada, A. J., Jr. (1996). Perceptual classification images from Vernier acuity masked by noise [Abstract]. *Perception*, 26, 18.

Ahumada, A. J., Jr. (2002). Classification image weights and internal noise level estimation. *Journal of Vision*, 2(1):8, 121–131, http://www.journalofvision.org/content/2/1/8, doi:10.1167/2.1.8.

Beard, B. L., & Ahumada, A. J., Jr. (1998). A technique to extract relevant image features for visual tasks. *Proceedings of SPIE*, 3299, 79–85.

Burgess, A. E., & Colborne, B. (1988). Visual signal detection. IV. Observer inconsistency. *Journal of the Optical Society of America A*, 5, 617–627.

Dobson, A. J., & Barnett, A. G. (2008). *An introduction to generalized linear models* (3rd ed.). Boca Raton, FL: Chapman and Hall/CRC.

Felsen, G., & Dan, Y. (2005). A natural approach to studying vision. *Nature Neuroscience*, 8, 1643–1646.

Gosselin, F., & Schyns, P. G. (2001). Bubbles: A technique to reveal the use of information in recognition tasks. *Vision Research*, 41, 2261–2271.

Gosselin, F., & Schyns, P. G. (2002). RAP: A new framework for visual categorization. *Trends in Cognitive Sciences*, 6, 70–77.

Gosselin, F., & Schyns, P. G. (2004). No troubles with bubbles: A reply to Murray and Gold. *Vision Research*, 44, 471–477.

Green, D. M., & Swets, J. A. (1974). *Signal detection theory and psychophysics*. Huntington, NY: R. E. Krieger Publishing Company. (Original work published 1966)

Knoblauch, K., & Maloney, L. T. (2008). Estimating classification images with generalized linear and additive models. *Journal of Vision*, 8(16):10, 1–19, http://www.journalofvision.org/content/8/16/10, doi:10.1167/8.16.10.

Macke, J. H., & Wichmann, F. A. (2010). Estimating predictive stimulus features from psychophysical data: The decision image technique applied to human faces. *Journal of Vision*, 10(5):22, 1–24, http://www.journalofvision.org/content/10/5/22, doi:10.1167/10.5.22.

McCullagh, P., & Nelder, J. (1989). *Generalized linear models* (2nd ed.). Boca Raton, FL: Chapman and Hall/CRC.

Mineault, P. J., Barthelmé, S., & Pack, C. C. (2009). Improved classification images with sparse priors in a smooth basis. *Journal of Vision*, 9(10):17, 1–24, http://www.journalofvision.org/content/9/10/17, doi:10.1167/9.10.17.

Murray, R. F. (2011). Classification images: A review. *Journal of Vision*, 11(5):2, 1–25, http://www.journalofvision.org/content/11/5/2, doi:10.1167/11.5.2.

Murray, R. F., Bennett, P. J., & Sekuler, A. B. (2002). Optimal methods for calculating classification images: Weighted sums. *Journal of Vision*, 2(1):6, 79–104, http://www.journalofvision.org/content/2/1/6, doi:10.1167/2.1.6.

Murray, R. F., & Gold, J. M. (2004). Troubles with bubbles. *Vision Research*, 44, 461–470.

Peterson, W. W., Birdsall, T. G., & Fox, W. C. (1954). The theory of signal detectability. *Transactions of the IRE Professional Group on Information Theory*, 4, 171–212.

Richards, V. M., & Zhu, S. (1994). Relative estimates of combination weights, decision criteria, and internal noise based on correlation coefficients. *Journal of the Acoustical Society of America*, 95, 423–434.

Rust, N. C., & Movshon, J. A. (2005). In praise of artifice. *Nature Neuroscience*, 8, 1647–1650.

Smith, M. L., Cottrell, G. W., Gosselin, F., & Schyns, P. G. (2005). Transmitting and decoding facial expressions. *Psychological Science*, 16, 184–189.

Appendix

The conventional classification image method can be seen as measuring the covariance between additive zero-mean stimulus noise and the observer's responses (Richards & Zhu, 1994; Murray, 2011). If we encode the observer's responses as *r* = −1 and *r* = +1, then over all trials where one signal is shown,

$ \mathrm{cov}(\mathbf{n}, r) = E[\mathbf{n}r] - E[\mathbf{n}]E[r] = E[\mathbf{n}r] $

since the noise has zero mean. This is, up to scale, the usual formula for a classification image over all trials where one signal is shown (Ahumada, 1996, 2002). The conventional bubbles image can be seen as measuring the covariance between multiplicative noise and the observer's responses. If we encode responses as *r* = 0 and *r* = 1,

$ \mathrm{cov}(\mathbf{n}, r) = E[\mathbf{n}r] - E[\mathbf{n}]E[r] = P(r = 1)\,(E[\mathbf{n} \mid r = 1] - E[\mathbf{n}]) $

The usual bubbles image is *E*[**n** | *r* = 1], which differs from this covariance only by a constant (Gosselin & Schyns, 2001). This is all elementary, but it shows that the two methods can be seen as doing very similar analyses, one on additive noise and the other on multiplicative noise, just as in the GLM versions.
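The covariance formulation can be checked directly in simulation. The sketch below (Python; a hypothetical 4-pixel template matcher, not the article's stimuli) computes *E*[**n***r*] with responses coded ±1 and recovers the sign pattern of the template.

```python
import random

random.seed(4)

# Illustrative 4-pixel template matcher in zero-mean Gaussian noise.
w = [1.0, -1.0, 0.5, -0.5]
n_trials = 20000

cov = [0.0] * len(w)
for _ in range(n_trials):
    n = [random.gauss(0.0, 1.0) for _ in w]
    dv = sum(ni * wi for ni, wi in zip(n, w)) + random.gauss(0.0, 1.0)
    r = 1 if dv > 0.0 else -1                # responses coded -1 / +1
    for j, nj in enumerate(n):
        cov[j] += nj * r                     # accumulate E[n r]; E[n] = 0
cov = [c / n_trials for c in cov]
```

Each element of the covariance is proportional to the corresponding template weight, which is the weighted-sum classification image in miniature.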

As a result, so long as the noise elements are weak enough that they bias the observer's response only slightly, there will be a close relationship between classification images and bubbles images. Coding responses as *r* = 1 and *r* = 2 as in the main text, suppose the observer's response probability on trials where the signal is **s**_{2} can be approximated as

$ P(r = 2 \mid \mathbf{n}) \approx p_{2} + \mathbf{n}^{T}\mathbf{u} $ (A6)

Here *p*_{2} is the probability of the observer giving response *r* = 2 without noise, **n** is a vector of additive noise, and **u** is a column vector that describes how each noise element perturbs the response probability. A template matcher in weak noise can be approximated as in Equation (A6), and in this case **u** is proportional to the template **w**.

Equations (A9) and (A13) show that under quite general conditions, classification images and bubbles images are closely related. They differ only in that Equation (A9) has **u** where Equation (A13) has **s**_{2} ∘ **u**, and Equation (A13) includes the mean noise field times a scalar. Thus when we measure a classification image using additive noise, we get a function of the observer's coefficients **u**. (The function depends on what noise distribution we use, e.g., if we use Gaussian white noise then Equation (A9) is proportional to **u**.) When we measure a bubbles image using multiplicative noise, we get the *same* function of the pointwise scaled coefficients **s**_{2} ∘ **u** (plus the mean noise field times a scalar). This similarity does not require strong assumptions about the observer, but follows from (a) the fact that in classification image experiments we use noise additively and in bubbles experiments we use it multiplicatively, and (b) the assumption that individual noise elements have small enough effects that we can describe the observer's response probabilities using the linear approximation in Equation (A6).

Footnotes

^{1}The GLM does make assumptions about other noise distributions. The GLM given by Equations (1) and (2) assumes that the decision variable is the linear predictor **n**^{T}**β**_{i} + *β*_{i,0} plus a noise source *ϵ* that is independent from trial to trial, has zero mean, and has a fixed variance that is independent of the linear predictor. It also assumes that the observer's responses are independent Bernoulli random variables.

Footnotes

^{2}Equations (1) and (2) assume that the observer's decision variable is a locally linear function of the stimulus in the neighborhoods of the two signals. Because the stimuli are noisy, it could occasionally happen that **s**_{1} + **n** is closer to **s**_{2} than to **s**_{1}. A reviewer pointed out that instead of using all stimuli from **s**_{1} trials as covariates in Equation (1), another possibility would be to use all stimuli that are physically closer to **s**_{1} than to **s**_{2} as covariates in Equation (1), regardless of whether they are from **s**_{1} trials or **s**_{2} trials. Similar comments apply to the covariates in Equation (2).

Footnotes


^{4}The method we have used does not incorporate a prior, so it requires at least as many trials as there are pixels in the bubbles image in order for there to be a unique maximum-likelihood solution. Several methods for imposing priors and other kinds of regularization have been developed for classification images and these can easily be adapted to bubbles images as well (Murray, 2011, pp. 9–10).