**Abstract**
Edges are important visual features, providing many cues to the three-dimensional structure of the world. One of these cues is edge blur. Sharp edges tend to be caused by object boundaries, while blurred edges indicate shadows, surface curvature, or defocus due to relative depth. Edge blur also drives accommodation and may be implicated in the correct development of the eye's optical power. Here we use classification image techniques to reveal the mechanisms underlying blur detection in human vision. Observers were shown a sharp and a blurred edge in white noise and had to identify the blurred edge. The resultant smoothed classification image derived from these experiments was similar to a derivative of a Gaussian filter. We also fitted a number of edge detection models (MIRAGE, N_{1}, and N_{3}^{+}) and the ideal observer to observer responses, but none performed as well as the classification image. However, observer responses were well fitted by a recently developed optimal edge detector model, coupled with a Bayesian prior on the expected blurs in the stimulus. This model outperformed the classification image when performance was measured by the Akaike Information Criterion. This result strongly suggests that humans use optimal edge detection filters to detect edges and encode their blur.

The N_{1} and N_{3}^{+} models (Georgeson et al., 2007), based upon the theoretical work of Lindeberg (1994, 1998), also assume that the image is analyzed by a bank of filters of different scales. In the N_{1} and N_{3}^{+} models, the location and blur of an edge are found by looking for peaks in a scale-space representation of the image (Lindeberg, 1998; Witkin, 1983). Other models of edge detection (Elder & Zucker, 1998) also embody the idea that to detect edges, the image must be analyzed at different scales.

We fitted the MIRAGE model (Watt & Morgan, 1985), the N_{1} model (Georgeson et al., 2007), and the N_{3}^{+} model (Georgeson et al., 2007) to our data by defining appropriate decision variables for those models. We find that none of them fits our blur detection data very well. This is surprising because these models have received support from many experiments: blur thresholds (Watt & Morgan, 1983, 1985), blur matching tasks (Georgeson, 1994; Georgeson et al., 2007; May & Georgeson, 2007a, 2007b), and the reported perception of edge location (Georgeson & Freeman, 1997; Hesse & Georgeson, 2005).

The contrast pattern of the sharp edge on the *i*th trial was *s*_{i}(*x*, *y*) = *S*(*x*) + *n*_{i}(*x*), where *S*(*x*) is a sharp step edge profile and *n*_{i}(*x*) is a Gaussian white noise sample. (Here we use *x* to refer to the vertical dimension on the stimulus, and *y* to refer to the horizontal dimension.) The contrast pattern of the blurred edge on the *i*th trial was *b*_{i}(*x*, *y*) = *B*(*x*) + *m*_{i}(*x*), where *B*(*x*) is a blurred edge formed by convolving a step edge profile with a Gaussian filter having scale *σ*, and *m*_{i}(*x*) is another noise sample. Contrast is defined as the luminance at a point divided by the mean luminance, minus 1. Both sharp and blurred edges had a contrast difference across the edge of 0.4 (i.e., a Michelson contrast of 0.2). Noise was created by adding an independent pseudorandom noise value to each scan line of each edge image. The noise values were drawn from a Gaussian distribution with a standard deviation of either 0.16 (low-noise condition) or 0.32 (high-noise condition) in contrast units. The one-dimensional spectral power densities were 0.8 × 10^{−4} deg^{−1} and 3.2 × 10^{−4} deg^{−1}, respectively. Before adding the noise signal to the edge, we truncated it to fall between ±0.8, so that the combined signal fell within the range of physically realizable values, [−1, 1]. Each noise sample was stored for classification image analysis as described below. These images are constant in the *y* direction, so the *y* coordinate is ignored from here on.
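As a concrete illustration, the stimulus construction can be sketched as follows (a minimal sketch: the pixel grid, blur value, and random seed are illustrative assumptions, not the exact experimental values):

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# One scan line across the edge; the real stimuli are constant in y,
# so a single 1-D profile captures the construction.
x = np.linspace(-0.5, 0.5, 256)

contrast = 0.4                        # contrast difference across the edge
S = 0.5 * contrast * np.sign(x)       # sharp step edge profile S(x)

# Blurred edge B(x): a step convolved with a Gaussian of scale sigma is a
# Gaussian CDF, which can be written with the error function.
sigma = 0.05                          # illustrative blur scale
B = 0.5 * contrast * np.array([math.erf(xi / (sigma * math.sqrt(2))) for xi in x])

# Gaussian white noise (low-noise condition, sd = 0.16 contrast units),
# truncated to +/-0.8 so the combined signal stays within [-1, 1].
noise_sd = 0.16
n_i = np.clip(rng.normal(0.0, noise_sd, x.size), -0.8, 0.8)
m_i = np.clip(rng.normal(0.0, noise_sd, x.size), -0.8, 0.8)

sharp_stim = S + n_i                  # s_i(x) = S(x) + n_i(x)
blurred_stim = B + m_i                # b_i(x) = B(x) + m_i(x)
```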

On each trial, the observer views two images *I*_{1} and *I*_{2}, and must decide which contains the blurred edge. One way they could do this is to compute a weighted sum of the contrasts in each image, and select the image which maximizes this sum as being the most blurred. The vector of weights is called a template. The weighted sum of image *I*_{j} is *θ* · *I*_{j} = Σ_{x} *θ*(*x*)*I*_{j}(*x*), where *θ* is the template vector, indexed by position *x*. The observer decides image *I*_{1} contains the blurred edge if *θ* · *I*_{1} − *θ* · *I*_{2} > 0; otherwise they decide image *I*_{2} was blurred. The difference *θ* · (*I*_{1} − *I*_{2}) is called a decision variable.

An observer who used the template *θ* without error would be correct on the *i*th trial when *θ* · (*b*_{i} − *s*_{i}) > 0, where the actual blurred and sharp images on that trial have been substituted into the decision variable. However, human observers often make a different choice when shown the same stimulus again, which must be caused by some internal randomness, or noise, unrelated to the stimuli. If we assume, for convenience, that the internal noise is a standard logistic variable, the observer's probability correct on the *i*th trial is a logistic function of the decision variable:

*p*_{i} = 1/(1 + exp(−*θ* · (*b*_{i} − *s*_{i}))).

Now, let *c*_{i} be 1 if the human observer actually was correct on the *i*th trial of the experiment, and 0 otherwise. The log-likelihood of the observer's responses, given the template *θ*, is then

*L*(*θ*) = Σ_{i} *c*_{i} log *p*_{i} + (1 − *c*_{i}) log(1 − *p*_{i}).   (Equation 3)

An estimate of the template *θ* is called a classification image. Classification images are commonly computed from the difference between the mean noise pattern when the observer is correct and the mean noise pattern when the observer is incorrect (Ahumada, 2002; Beard & Ahumada, 1998; Murray, 2011; Murray, Bennett, & Sekuler, 2002). However, we used the maximum likelihood estimate for *θ*, which can be computed by logistic regression (Knoblauch & Maloney, 2008; Nelder & Wedderburn, 1972). The covariate matrix used in the logistic regression is *X*_{i,j} = *b*_{i}(*j*) − *s*_{i}(*j*), where the *i*th row of *X* contains the difference between the blurred and sharp stimuli on the *i*th trial. This was regressed against the observation vector *c*.
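The maximum likelihood template can be computed with any logistic regression routine. The sketch below simulates trials from a made-up template-using observer and recovers the classification image by gradient ascent on the log-likelihood (all stimulus and observer parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up 1-D sharp and blurred edge profiles (64 samples).
x = np.linspace(-1.0, 1.0, 64)
S = 0.2 * np.sign(x)
kern = np.exp(-0.5 * (x / 0.15) ** 2)
kern /= kern.sum()
B = np.convolve(S, kern, mode="same")

# Simulate trials (blurred + noise vs sharp + noise) and an observer who
# applies the template theta_true = B - S with internal logistic noise.
n_trials, noise_sd = 4000, 0.16
b = B + rng.normal(0, noise_sd, (n_trials, x.size))
s = S + rng.normal(0, noise_sd, (n_trials, x.size))
theta_true = B - S
dv = 40.0 * (b - s) @ theta_true                 # decision variable per trial
c = (rng.random(n_trials) < 1 / (1 + np.exp(-dv))).astype(float)  # correct?

# Maximum-likelihood classification image: logistic regression of c on the
# covariate matrix X[i, j] = b_i(j) - s_i(j), via plain gradient ascent.
X = b - s
theta = np.zeros(x.size)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ theta))
    theta += 0.5 * X.T @ (c - p) / n_trials      # gradient of the log-likelihood

r = np.corrcoef(theta, theta_true)[0, 1]         # should be strongly positive
```

With enough trials the estimate converges on a scaled copy of the generating template; in the real experiment, the stored noise samples supply the covariate matrix.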

The smoothness of the classification image is controlled by a smoothing parameter *λ*, which we chose using the Akaike Information Criterion (AIC). The AIC is a model selection measure that takes into account both the likelihood of a model and its complexity. It is defined as −2*L*(*θ*) + 2*N*(*θ*), where *N*(*θ*) is the effective number of parameters. The effective number of parameters is the trace of the projection matrix of the logistic regression on the final convergent iteration (Hastie & Tibshirani, 1986) and is reduced as the smoothing increases. The magnitude of the AIC is not meaningful, but differences between AICs are (Burnham & Anderson, 2004). When selecting amongst models, the one with the lowest AIC is to be preferred, so for the smoothing parameter, we chose the value of *λ* which minimized the AIC.
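In code, the AIC comparison itself is trivial; the numbers below are made up, and the second argument stands for the effective number of parameters (for the smoothed fit, the trace of the projection matrix):

```python
def aic(log_likelihood: float, n_params: float) -> float:
    """AIC = -2 L(theta) + 2 N(theta); lower values are preferred."""
    return -2.0 * log_likelihood + 2.0 * n_params

# Choose the smoothing parameter lambda that minimizes the AIC.
# Each entry maps lambda -> (log-likelihood, effective number of parameters):
# heavier smoothing lowers the likelihood but also the effective complexity.
candidates = {0.1: (-2300.0, 150.0), 1.0: (-2320.0, 76.0), 10.0: (-2400.0, 20.0)}
best_lambda = min(candidates, key=lambda lam: aic(*candidates[lam]))
```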

Each model we fitted defines a decision variable *d*(*I*_{1}, *I*_{2}, *φ*) of the two stimuli and a set of parameters *φ*. The observer will choose stimulus *I*_{1} as being the blurred edge if *d*(*I*_{1}, *I*_{2}, *φ*) > 0; otherwise they will choose stimulus *I*_{2}. Given a decision variable for a particular model, the probability of a correct response is simply

*p* = 1/(1 + exp(−*k* *d*(*I*_{1}, *I*_{2}, *φ*))).

The scaling factor *k* is needed because of the assumption that the internal noise is a standard logistic variable. This probability correct is then substituted into Equation 3, and the model parameters *φ* can be estimated by maximum likelihood. This approach is an extension of that used by Solomon (2002), who fitted parameterized templates by maximum likelihood. Here, however, we fit entire models.
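Once a model's decision variable has been computed for every trial, fitting reduces to a maximum likelihood search over its parameters; for a fixed decision variable, only the scaling *k* remains. A sketch, with simulated decision variables standing in for a real model's output:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_k(d, c, ks=np.linspace(0.01, 10.0, 1000)):
    """Grid-search maximum likelihood estimate of the logistic scaling k."""
    best_k, best_ll = ks[0], -np.inf
    for k in ks:
        p = np.clip(1 / (1 + np.exp(-k * d)), 1e-12, 1 - 1e-12)
        ll = np.sum(c * np.log(p) + (1 - c) * np.log(1 - p))
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k, best_ll

# Simulated data: decision variables d_i, and responses from an observer whose
# true scaling is k = 2, i.e., P(correct) = logistic(2 * d).
d = rng.normal(1.0, 1.0, 5000)
c = (rng.random(d.size) < 1 / (1 + np.exp(-2.0 * d))).astype(float)
k_hat, _ = fit_k(d, c)
```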

In the following sections, we fit the ideal observer, the MIRAGE model, the N_{1} and N_{3}^{+} models (Georgeson et al., 2007), and an optimal edge detector (McIlhagga, 2011) to our blur detection data. Some models of blur detection that focus on predicting blur thresholds (e.g., Watson & Ahumada, 2011) give the magnitude of a decision variable, but not its sign. These kinds of models are not intended to be used for trial-by-trial modeling and so we did not attempt to fit them.

Table 1. AIC for the smoothed classification image (row 1), and ΔAIC relative to it for the alternative models (rows 2–7); lower is better. N is the (effective) number of model parameters.

| Subject | KAM | KAM | TS | TS | WHM | WHM |
|---|---|---|---|---|---|---|
| Noise contrast | 0.16 | 0.32 | 0.16 | 0.32 | 0.16 | 0.32 |
| 1) Smoothed classification image, AIC | 4494 (N = 76) | 5800 (N = 72) | 5083 (N = 34) | 5038 (N = 48) | 4291 (N = 75) | 5439 (N = 50) |
| 2) Unsmoothed classification image (N = 400), ΔAIC | 349 | 322 | 264 | 363 | 307 | 387 |
| 3) Ideal observer (N = 1), ΔAIC | 654 | 570 | 630 | 1053 | 703 | 978 |
| 4) MIRAGE (N = 1), ΔAIC | 1440 | 998 | 1566 | 1857 | 1778 | 1438 |
| 5) N_{1} model (N = 2), ΔAIC | 502 | 523 | 560 | 911 | 957 | 902 |
| 6) N_{3}^{+} model (N = 2), ΔAIC | 1744 | 994 | 1312 | 1695 | 1992 | 1332 |
| 7) Optimal edge detector, Bayesian (N = 6), ΔAIC | −178 | −274 | 94 | −1 | −72 | −90 |

Whatever its form, the true human decision variable *d*_{human}(*I*_{1}, *I*_{2}) can be expanded as a Taylor series in the stimulus contrasts *I*_{1} and *I*_{2}. The first-order term of this Taylor series is a linear combination of stimulus contrasts *I*_{1} and *I*_{2}, like the classification image. Thus the classification image can be thought of as an estimate of the first-order term of the true decision variable. This means that the AIC of the smoothed classification image can be used as a benchmark for accepting or rejecting alternative models for human blur detection. If some alternative model does not have a better AIC than the classification image, then it is worse than a first-order approximation to the true human decision variable. In that case it is unlikely to be correct. Using this criterion, we can evaluate other possible models for human blur detection. We turn to this next.

An ideal observer who must decide which of the two images *I*_{1} and *I*_{2} contains the blurred edge will do so by computing two log-likelihoods. The first is the log-likelihood that image *I*_{1} contains the blurred edge and *I*_{2} contains the sharp edge. The second is the log-likelihood of the alternative possibility, that image *I*_{1} contains the sharp edge and *I*_{2} contains the blurred edge. They choose the alternative that has the highest likelihood as being the one most likely to be correct. It can be shown that, in additive Gaussian noise, this is equivalent to computing a linear decision variable *θ*_{ideal} · (*I*_{1} − *I*_{2}), where the ideal template is proportional to the difference between the blurred and sharp edges, *θ*_{ideal}(*x*) = *k*[*B*(*x*) − *S*(*x*)], where *k* is a scaling factor.

We fitted the ideal observer to the responses by substituting the template *θ*_{ideal} into Equation 3. In doing so, we are implicitly adding internal noise to the ideal observer in order to improve their fit. There is one free parameter here, the scaling factor *k*. The ΔAIC values for the ideal observer, compared to the smoothed classification image, are shown in row 3 of Table 1. In all cases, the ideal observer is substantially worse than the best classification image and so is highly unlikely to be a correct account of human performance in this task.
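For these stimuli the ideal template can be written down directly. The sketch below uses illustrative values (contrast step 0.4; blur *σ* = 0.05 in the same units as *x*), not the exact experimental parameters:

```python
import math
import numpy as np

x = np.linspace(-0.5, 0.5, 400)        # position across the edge
contrast = 0.4                          # contrast difference across the edge
sigma = 0.05                            # assumed blur of the blurred edge

S = 0.5 * contrast * np.sign(x)         # sharp step edge S(x)
# Blurred edge B(x): a step convolved with a Gaussian is the Gaussian CDF,
# written here using the error function.
B = 0.5 * contrast * np.array([math.erf(xi / (sigma * math.sqrt(2))) for xi in x])

theta_ideal = B - S                     # ideal template, up to the scaling k
```

The resulting template is a biphasic, antisymmetric weighting function concentrated around the edge location.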

**N_{1} and N_{3}^{+} models**

The N_{1} and N_{3}^{+} models yield a scale-space representation of the input image (Witkin, 1983), which is a representation of the image over a range of scales. The scale filters *F*(*x*, *σ*) in the N_{1} model are normalized derivatives of Gaussians,

*F*(*x*, *σ*) = *σ*^{p} ∂*g*(*x*, *σ*)/∂*x*,

where *g*(*x*, *σ*) is a Gaussian with scale *σ*. The normalization exponent *p* affects which filter responds best to an edge with a particular blur. The edges in the image are found by looking for local maxima, or peaks, in the scale space. The spatial coordinate of the local maximum gives the location of the edge, and the scale coordinate of the local maximum is proportional to the blur of the edge. A Gaussian blurred edge with scale *σ*_{e} will be detected by a filter with scale *σ*_{e}√(*p*/(*n* − *p*)), where *n* = 1 for the N_{1} model and *n* = 3 for the N_{3}^{+} model; *p* = *n*/2 is a conventional choice here, but anything between 0 and *n* is valid.
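The peak-scale relation can be checked numerically for the N_{1} case (*n* = 1): at the centre of a Gaussian-blurred edge, a *σ*^{p}-normalized first-derivative-of-Gaussian filter responds in proportion to *σ*^{p}/√(*σ*² + *σ*_{e}²), so the peak over scale is easy to locate (the values of *σ*_{e} and the scale grid below are arbitrary):

```python
import numpy as np

sigma_e = 2.0                    # blur of the Gaussian-blurred edge
p = 0.5                          # conventional normalization exponent, p = n/2
scales = np.linspace(0.1, 10.0, 10000)

# Response of a sigma^p-normalized first-derivative-of-Gaussian filter at the
# centre of the blurred edge (up to a constant factor).
response = scales**p / np.sqrt(scales**2 + sigma_e**2)

peak_scale = scales[np.argmax(response)]
predicted = sigma_e * np.sqrt(p / (1.0 - p))   # peak-scale relation for n = 1
```

With *p* = 0.5 the predicted peak scale equals the edge blur itself, which is what makes this normalization convenient.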

Let *Scale*(*x*, *σ*) be the scale space produced from an input image by either the N_{1} or N_{3}^{+} filters. Stimulus noise generates many peaks in the scale space, and we have to select the one that corresponds to the sharp or blurred edge. We choose the peak that has the greatest edge contrast. To do this, we find all peaks in scale space, multiply the height of each by a correction factor to get the contrast of the edge, and then choose the one with the highest contrast. Note that the peaks are found before the correction factor is applied. The correction factor for N_{3}^{+} is derived in May and Georgeson (2007a), equation 2; the correction factor for N_{1} is derived similarly.

If the largest peak in the scale space of image *I*_{1} is at position *x*_{1} and scale *σ*_{1}, our estimate for edge blur is simply *σ*_{1}√((*n* − *p*)/*p*). An obvious decision variable is then *d*(*I*_{1}, *I*_{2}) = (*σ*_{1} − *σ*_{2})√((*n* − *p*)/*p*), where *σ*_{2} is defined likewise for image *I*_{2}. We used this decision variable to fit the N_{1} and N_{3}^{+} models to observer responses. The scales ranged from 1 to 60 pixels (0.0042° to 0.252°), logarithmically spaced. The exact choice of scales had only a minor influence on the fit. We assumed the observer knew the location of the edge and only had to find the peak in scale. (Relaxing this assumption worsened the fit.) ΔAIC values for the N_{1} and N_{3}^{+} models are given in Table 1, rows 5 and 6. These AIC values were obtained by finding the normalization exponent *p* which yielded the smallest AIC. Neither N_{1} nor N_{3}^{+} fits the data very well, when compared to the fit of the smoothed classification image. The main reason for the poor fit was that, in both models, the scale space was overwhelmed by noise peaks.

The optimal edge detector *D*_{σ} for an edge of scale *σ* (here defined as a step edge convolved with a Gaussian filter of scale *σ*) can be approximated by a convolution of three filters (McIlhagga, 2011),

*D*_{σ}(*x*) = *W*(*x*) ∗ *g*(*x*, *σ*_{0}) ∗ *M*_{σ}(*x*),

where *W*(*x*) is a whitening filter, *g*(*x*, *σ*_{0}) is an auxiliary Gaussian filter with a fixed scale *σ*_{0}, and *M*_{σ}(*x*) is a filter matched to the shape of an edge of scale *σ* after it has been whitened. The matched filter *M*_{σ}(*x*) is normalized to have an r.m.s. power of 1. The whitening filter *W*(*x*) whitens images having a natural-image power spectrum *C*^{2}/*f*^{2} + *n*_{0}^{2} (Burton & Moorhead, 1987; Field, 1987), where *C*^{2}/*f*^{2} is brown noise and *n*_{0}^{2} is the squared amplitude of the white noise. The whitening filter acts like a smoothed derivative operator. The optimal detector has two parameters: the ratio *C*/*n*_{0}, which is estimated from the image, and the scale *σ*_{0} for the auxiliary Gaussian filter, which should be small. The optimal edge detector is diagrammed in Figure 3.
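The whitening filter is easiest to express in the frequency domain, where its gain is the inverse square root of the assumed power spectrum (the values of *C* and *n*_{0} below are illustrative, not fitted):

```python
import numpy as np

N = 1024
f = np.fft.rfftfreq(N, d=1.0)           # spatial frequency, cycles/sample
C, n0 = 0.05, 1.0                        # assumed spectrum parameters

# Power spectrum C^2/f^2 + n0^2 (brown plus white noise); the whitening gain
# is 1/sqrt(power), written as f/sqrt(C^2 + (n0 f)^2) to avoid dividing by
# f = 0 at DC.
W_gain = f / np.sqrt(C**2 + (n0 * f) ** 2)

# At low frequencies the gain grows like f/C (a derivative operator); at high
# frequencies it saturates at 1/n0, hence a "smoothed" derivative.
```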

The convolution of a single optimal edge detector *D*_{σ}(*x*) with an image *I*(*x*) represents the image at a single scale. To represent the image at all scales, we must convolve the image *I*(*x*) with optimal edge detectors at different scales. The collection of these convolutions is a scale-space representation of the image, *R*(*x*, *σ*), given by

*R*(*x*, *σ*) = *D*_{σ}(*x*) ∗ *I*(*x*).

The square of the scale space, *R*(*x*, *σ*)^{2}, is related to the log-likelihood of observing the image *I*(*x*) given there is an edge at position *x* with scale *σ* (McIlhagga, 2011):

log *P*(*I* | edge at *x*, *σ*) = ½*R*(*x*, *σ*)^{2} + constant.

If all locations and blurs are equally probable, the maximum of *R*(*x*, *σ*)^{2} gives the location and blur associated with the most probable edge.

A Bayesian observer would combine *R*(*x*, *σ*)^{2} with their prior distribution of edge location and scale. For simplicity, we will assume the observer knows the edge position exactly, and so consider only the scale coordinate. A Bayesian observer who views two stimulus images *I*_{1} and *I*_{2} in our experiment may hypothesize that image *I*_{1} contains a blurred edge with scale *σ*_{b}, and image *I*_{2} contains a sharp edge with scale *σ*_{s}. Letting *π*(*σ*_{b}, *σ*_{s}) be the observer's prior probability for this hypothesis, the log posterior probability is

*P*_{1}(*σ*_{b}, *σ*_{s}) = ½*R*_{1}(*σ*_{b})^{2} + ½*R*_{2}(*σ*_{s})^{2} + log *π*(*σ*_{b}, *σ*_{s}) + constant,   (Equation 11)

where *R*_{1} and *R*_{2} are the scale-space representations of images *I*_{1} and *I*_{2} at spatial position *x* = 0. Alternatively, the observer may hypothesize that image *I*_{2} contains a blurred edge with scale *σ*_{b}′, and image *I*_{1} contains a sharp edge with scale *σ*_{s}′. The log posterior probability of this hypothesis is

*P*_{2}(*σ*_{b}′, *σ*_{s}′) = ½*R*_{2}(*σ*_{b}′)^{2} + ½*R*_{1}(*σ*_{s}′)^{2} + log *π*(*σ*_{b}′, *σ*_{s}′) + constant,

when *σ*_{b}′ > *σ*_{s}′. The constant in this equation is identical to the one in Equation 11. The optimal decision rule is to decide that image *I*_{1} contains the blurred edge and *I*_{2} the sharp edge when

∫∫ exp *P*_{1}(*σ*_{b}, *σ*_{s}) d*σ*_{b} d*σ*_{s} > ∫∫ exp *P*_{2}(*σ*_{b}′, *σ*_{s}′) d*σ*_{b}′ d*σ*_{s}′.

However, this calculation is computationally expensive, since one would have to take the scale space outputs, exponentiate them, then integrate them. In addition, it does not directly yield an estimate of the edge blur, for which one would have to compute the posterior mean. (It did not fit the data particularly well either.)

Instead, we assume the observer decides in favor of the first hypothesis (a blurred edge in image *I*_{1}) when the maximum posterior probability of the first hypothesis exceeds the maximum posterior probability of the second; that is, when

max_{σ_b > σ_s} *P*_{1}(*σ*_{b}, *σ*_{s}) − max_{σ_b′ > σ_s′} *P*_{2}(*σ*_{b}′, *σ*_{s}′) > 0.

The decision variable for this model, *d*(*I*_{1}, *I*_{2}), is simply the left-hand side of this inequality. It can be easily computed from the output of the optimal edge detector, and estimates of the edge blurs are immediately available as the values of *σ*_{b}, *σ*_{s} or *σ*_{b}′, *σ*_{s}′ which yielded the maximum. The prior factors as *π*(*σ*_{b}, *σ*_{s}) = *π*_{b}(*σ*_{b})*π*_{s}(*σ*_{s}), where *π*_{b} and *π*_{s} are priors for the blurred and sharp edge scales at the known position *x* = 0. This prior is the observer's belief about the distribution of scale, not the true distribution. The sharp and blurred edge priors were modeled as beta distributions because these are flexible distributions with a finite domain.
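A sketch of this maximum-posterior decision rule, using made-up scale-space outputs and a flat prior restricted to *σ*_{b} > *σ*_{s} (the fitted model used beta priors instead):

```python
import numpy as np

scales = np.linspace(0.05, 2.0, 60)         # candidate edge scales sigma

# Made-up squared scale-space responses at the known edge position x = 0:
# image 1 responds best at a coarse scale (blurred), image 2 at a fine one.
R1_sq = 8.0 * np.exp(-0.5 * ((scales - 1.0) / 0.3) ** 2)
R2_sq = 8.0 * np.exp(-0.5 * ((scales - 0.2) / 0.1) ** 2)

# Flat log prior over (sigma_b, sigma_s) pairs, restricted to hypotheses
# where the blurred scale exceeds the sharp scale.
valid = scales[:, None] > scales[None, :]
log_prior = np.where(valid, 0.0, -np.inf)

# Hypothesis 1: blurred edge in image 1, sharp edge in image 2.
P1 = 0.5 * R1_sq[:, None] + 0.5 * R2_sq[None, :] + log_prior
# Hypothesis 2: blurred edge in image 2, sharp edge in image 1.
P2 = 0.5 * R2_sq[:, None] + 0.5 * R1_sq[None, :] + log_prior

d = P1.max() - P2.max()       # decision variable: d > 0 -> image 1 is blurred
i_b, i_s = np.unravel_index(np.argmax(P1), P1.shape)
blur_estimate = scales[i_b]   # maximum-posterior blurred-edge scale
```

Because the blur estimate is just the arg-max of the winning hypothesis, it comes for free with the decision, unlike the full integration rule.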

The model has six free parameters: the scaling factor *k*, the auxiliary blur in the optimal filters *σ*_{0}, and two beta parameters for each of the two priors *π*_{b} and *π*_{s}. The value of *σ*_{0} was fitted individually by subject and noise level to provide the best fit. No attempt was made to enforce consistency of the parameters within subject. The whitening parameter *C*/*n*_{0} is not a free parameter and was estimated for each subject and noise level from the collection of all stimuli shown to that subject. The same set of scales adopted for the N_{1} and N_{3}^{+} models was used here (up to 60 pixels, or 0.187°), except that the scales for subject TS extended out to 80 pixels (0.25°). The choice of scales affects the AIC only marginally. Fitting of the free parameters for the optimal detector was difficult, and we adopted a semi-Monte Carlo method, in which a Nelder-Mead minimization routine (routine *fmins* in Matlab) was started at many randomly selected initial values, and the best result was selected.

To visualize the goodness of fit, for each value *d* of the decision variable, we selected a subset of trials in an interval around *d*. We then measured the observer's probability correct over this subset of trials. If the model fits the observer responses, we would expect the observer's probability correct to be a smooth logistic function of the model's decision variable.

Let *p*_{i} = *p*(*b*_{i}, *s*_{i}, *θ*) be the probability correct for the *i*th trial, as specified by the model, and let *c*_{i} be 1 or 0 depending on whether the observer was actually correct. The observed likelihood of the model is

*L*_{obs} = Σ_{i} *c*_{i} log *p*_{i} + (1 − *c*_{i}) log(1 − *p*_{i}).

Now simulate an observer by setting *c*_{i}^{sim} equal to 1 if a uniform random variable *r*_{i} is less than *p*_{i}. The likelihood of the simulated observer is

*L*_{sim} = Σ_{i} *c*_{i}^{sim} log *p*_{i} + (1 − *c*_{i}^{sim}) log(1 − *p*_{i}).

We can repeat this many times to find the empirical distribution of simulated likelihoods conditional on the model probabilities, i.e., the distribution of observed likelihoods that would occur if the model were precisely correct. If the observed likelihood *L*_{obs} is consistent with being drawn from this simulated distribution, then the model fits the data. The observed likelihood *L*_{obs} invariably fell between the 40th and 60th percentiles of the simulated likelihood distribution, so the observer responses are entirely consistent with the optimal edge detector model for all observers and noise levels.
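The procedure just described is a parametric bootstrap. A sketch with made-up model probabilities (here the stand-in "observer" follows the model exactly, so its likelihood should fall well inside the simulated distribution):

```python
import numpy as np

rng = np.random.default_rng(1)

def log_lik(c, p):
    """L = sum_i c_i log p_i + (1 - c_i) log(1 - p_i)."""
    return float(np.sum(c * np.log(p) + (1 - c) * np.log(1 - p)))

p = rng.uniform(0.55, 0.95, 2000)               # model's trial-by-trial P(correct)
c_obs = (rng.random(p.size) < p).astype(float)  # stand-in observer responses
L_obs = log_lik(c_obs, p)

# Distribution of likelihoods that would occur if the model were exactly right.
L_sim = np.array([log_lik((rng.random(p.size) < p).astype(float), p)
                  for _ in range(500)])
percentile = 100.0 * np.mean(L_sim < L_obs)
# A non-extreme percentile means the model is consistent with the responses.
```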


The whitening filter *W*(*x*) and the matched filter *M*_{σ}(*x*) both change in response to changes in image statistics. The main use for this adaptive change in the experiments reported here is to cope with large amounts of noise, but in other circumstances it adjusts the edge detectors to follow the image statistics. In particular, the optimal edge detector will adapt to image blur. Humans also adapt to image blur (Webster, Georgeson, & Webster, 2002). The optimal edge detector model suggests that this adaptive process occurs in order to optimize edge detection performance, and this is consistent with reports that blur sensitivity improves after adaptation to blur (Cufflin, Mankowska, & Mallen, 2007).

The optimal edge detector is closely related to the N_{1} model. When the white noise is zero, the whitening filter *W*(*x*) becomes a derivative operator, and the matched filter *M*_{σ}(*x*) becomes a Gaussian function. Under these conditions, the optimal filter *D*_{σ}(*x*) is a derivative of a Gaussian, which is the filter shape suggested by Lindeberg (1998) and the N_{1} model (Georgeson et al., 2007), among others. Given that the optimal edge detector is so similar to the N_{1} model, perhaps an N_{1} model with a Bayesian prior might fit the data better than the simple N_{1} model we used. We added the same form of Bayesian prior as used in the optimal model to the N_{1} model with normalization exponent *p* = 0.5. (This normalization is needed for the N_{1} filter outputs to be interpreted as likelihoods.) This Bayes-N_{1} model yielded ΔAIC values of 1012, −20, 630, 57, −42, and 10 (in the same order as the columns of Table 1). While never better than the optimal model, the Bayes-N_{1} model does beat the smoothed classification image in two cases.

The N_{1} model is good at accounting for human blur perception when noise is absent, but the N_{3}^{+} model is better (Georgeson et al., 2007). The N_{3}^{+} model is nonlinear, so its success implies that human blur perception is not like the linear model proposed here. However, we do not have an optimal theory for nonlinear filters like those used in N_{3}^{+}, so we do not yet know whether a nonlinear edge detector would account for our data better than the current model. Certainly, the N_{3}^{+} model as it stands is unable to account for our data. It is possible that human edge detection behaves like a set of optimal linear filters at high noise levels, as here, but transitions to a nonlinear detector like N_{3}^{+} at very low noise levels.

**References**

*Journal of Vision*, 2(1):8, 121–131. http://www.journalofvision.org/content/2/1/8, doi:10.1167/2.1.8.

*IEEE Transactions on Automatic Control*, 19(6):716–723. doi:10.1109/TAC.1974.1100705.

*Neural Computation*, 4:196–210.

*Proceedings of SPIE* (pp. 79–85). Presented at Human Vision and Electronic Imaging III, San Jose, CA, USA. doi:10.1117/12.320099.

*Sociological Methods & Research*, 33(2):261–304. doi:10.1177/0049124104268644.

*Applied Optics*, 26(1):157–170. doi:10.1364/AO.26.000157.

*IEEE Transactions on Pattern Analysis and Machine Intelligence*, 8(6):679–698.

*Investigative Ophthalmology & Visual Science*, 48(6):2932–2939. http://www.iovs.org/content/48/6/2932, doi:10.1167/iovs.06-0836.

*IEEE Transactions on Pattern Analysis and Machine Intelligence*, 20(7):699–716.

*Journal of the Optical Society of America A, Optics and Image Science*, 4(12):2379–2394.

*Vision Research*, 38(19):2869–2879. doi:10.1016/S0042-6989(98)00087-X.

*Physica Scripta*, 39(1):153–160. doi:10.1088/0031-8949/39/1/025.

*Vision Research*, 51(7):771–781. doi:10.1016/j.visres.2010.09.027.

*Ciba Foundation Symposium*, 184:147–165; discussion 165–169, 269–271.

*Vision Research*, 37(1):127–142.

*Journal of Vision*, 7(13):7, 1–21. http://www.journalofvision.org/content, doi:10.1167/7.13.7.

*Journal of the Optical Society of America*, 71(4):448–452. doi:10.1364/JOSA.71.000448.

*Statistical Science*, 1(3):297–318.

*Proceedings of the Royal Society of London. Series B, Biological Sciences*, 231(1263):251–288. doi:10.1098/rspb.1987.0044.

*Vision Research*, 45(4):507–525. doi:10.1016/j.visres.2004.09.013.

*Journal of Vision*, 8(16):10, 1–19. http://www.journalofvision.org/content/8/16/10, doi:10.1167/8.16.10.

*IEEE Transactions on Pattern Analysis and Machine Intelligence*, 16(12):1207–1212.

*Vision Research*, 26(6):957–971. doi:10.1016/0042-6989(86)90153-7.

*Statistics in Medicine*, 21(24):3789–3801. doi:10.1002/sim.1421.

*Current Opinion in Neurobiology*, 11(4):475–480. doi:10.1016/S0959-4388(00)00237-3.

*Current Biology*, 13(6):493–497. doi:10.1016/S0960-9822(03)00135-0.

*Neural Computation*, 8(3):531–543. doi:10.1162/neco.1996.8.3.531.

*Journal of Applied Statistics*, 21(2):225–270.

*International Journal of Computer Vision*, 30(2):79–116. doi:10.1023/A:1008045108935.

*Vision: A computational investigation into the human representation and processing of visual information*. Cambridge, MA: MIT Press.

*Proceedings of the Royal Society of London. Series B, Biological Sciences*, 207(1167):187–217. doi:10.1098/rspb.1980.0020.

*Journal of the Optical Society of America A, Optics, Image Science, and Vision*, 13(4):681–688.

*Proceedings: Biological Sciences*, 263(1367):169–172.

*Perception*, 26(9):1147–1158. doi:10.1068/p261147.

*Vision Research*, 40(25):3501–3506. doi:10.1016/S0042-6989(00)00178-4.

*Vision Research*, 47:1705–1720.

*Vision Research*, 47(13):1721–1731. doi:10.1016/j.visres.2007.02.018.

*International Journal of Computer Vision*, 91:251–261. doi:10.1007/s11263-010-0392-0.

*Journal of Vision*, 4(8):539. http://www.journalofvision.org/content/4/8/539, doi:10.1167/4.8.539.

*Journal of Vision*, 11(5):2, 1–15. http://www.journalofvision.org/content/11/5/2, doi:10.1167/11.5.2.

*Journal of Vision*, 2(1):6, 79–104. http://www.journalofvision.org/content/2/1/6, doi:10.1167/2.1.6.

*Journal of the Royal Statistical Society, Series A (General)*, 135(3):370–384. doi:10.2307/2344614.

*Journal of the Optical Society of America A*, 5(4):598–605. doi:10.1364/JOSAA.5.000598.

*Documenta Ophthalmologica*, 43(1):65–89. doi:10.1007/BF01569293.

*Journal of Neurophysiology*, 88(1):455–463.

*The Journal of Physiology*, 558(3):717–728. doi:10.1113/jphysiol.2004.065771.

*Network: Computation in Neural Systems*, 5(2):147–155. doi:10.1088/0954-898X/5/2/002.

*Journal of Vision*, 2(1):7, 105–120. http://www.journalofvision.org/content/2/1/7, doi:10.1167/2.1.7.

*IEEE Transactions on Pattern Analysis and Machine Intelligence*, 12(12):1186–1190.

*Network: Computation in Neural Systems*, 16(2–3):175–190. doi:10.1080/09548980500290047.

*Journal of Vision*, 11(5):10. http://www.journalofvision.org/content/11/5/10, doi:10.1167/11.5.10.

*Vision Research*, 23(12):1465–1477. doi:10.1016/0042-6989(83)90158-X.

*Vision Research*, 25(11):1661–1674. doi:10.1016/0042-6989(85)90138-5.

*Nature Neuroscience*, 5(9):839–840.

*Proceedings of the Eighth International Joint Conference on Artificial Intelligence*, Vol. 2 (pp. 1019–1022). Karlsruhe, West Germany: Morgan Kaufmann Publishers Inc.