**CORRECTIONS TO:** Geisler, W. S., Najemnik, J., & Ing, A. D. (2009). Optimal stimulus encoders for natural tasks. *Journal of Vision, 9*(13):17, 1–16, http://journalofvision.org/9/13/17/, doi:10.1167/9.13.17.

There is a mathematical error in our publication entitled “Optimal stimulus encoders for natural tasks” (*Journal of Vision, 9*(13):17, 1–16). The error is in the text surrounding Equation 10 and in Appendix A, where Equation 10 is derived. The error has no effect on the first example application concerning “image patch identification,” and only a minor effect on the second example application concerning “foreground identification.” Nonetheless, under some circumstances, the mathematical error might have more substantial consequences. Here we provide replacements for the text on p. 5 concerning Equation 10 and for Appendix A.

**Replacement text on p. 5:**

Under the above assumptions and expanding *p*(*k* ∣ **r**_*q*(*k*, *l*)) using Bayes rule (see Appendix A) we have:

where *n*_*k* is the number of training samples from category *k*, and *Z* is a normalization factor. In keeping with the approximation in Equation 4, the logarithm of this formula gives the average relative entropy when the stimulus is **s**(*k*, *l*). Thus, Equations 5–10 provide a closed-form expression for the average relative entropy of the posterior probability distribution (that the ideal observer computes) for arbitrary samples from the joint probability distribution of environmental categories and associated stimuli, *p*_0(*k*, *l*).
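The structural role of *n*_*k* and *Z* in this Bayes-rule expansion — an empirical prior proportional to the training-sample count, divided by a normalization factor — can be illustrated with a small sketch. This is a hypothetical illustration only: the names `posterior_with_empirical_prior` and `likelihoods` are our own, `likelihoods[k]` merely stands in for the category-conditional term, and the specific closed form of the corrected Equation 10 is not reproduced here.

```python
import numpy as np

def posterior_with_empirical_prior(likelihoods, counts):
    """Sketch of a Bayes-rule posterior in which the prior for category k
    is proportional to its number of training samples n_k, and Z is the
    normalization factor that makes the posterior sum to 1."""
    unnormalized = likelihoods * counts   # n_k times the conditional term for each k
    Z = unnormalized.sum()                # normalization factor
    return unnormalized / Z

# Toy usage: equal conditional terms, but category 0 has three times
# as many training samples as category 1.
post = posterior_with_empirical_prior(np.array([0.5, 0.5]), np.array([3.0, 1.0]))
```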

To estimate the optimal linear receptive fields we use a ‘greedy’ procedure. In other words, neurons are added to the population one at a time, with each neuron's receptive field being selected to produce the biggest decrease in decoding error. Specifically, we proceed sequentially by first finding the encoding function *r*_1(*k*, *l*) that minimizes $\bar{D}_1$ (see Equation 5); then we substitute the estimated *r*_1(*k*, *l*) into Equation 10 and find the encoding function *r*_2(*k*, *l*) that minimizes $\bar{D}_2$; then we substitute the estimated *r*_1(*k*, *l*) and *r*_2(*k*, *l*) into Equation 10, and find the encoding function *r*_3(*k*, *l*) that minimizes $\bar{D}_3$, and so on.
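The greedy procedure above can be sketched generically. This is a minimal illustration, assuming a finite candidate set of encoders and a callable `decoding_error` that plays the role of $\bar{D}_q$; both names are our own, and the toy quantities below are not the paper's actual receptive fields or error measure.

```python
def greedy_select(candidates, decoding_error, n_neurons):
    """Greedy sketch: add one encoder at a time, each chosen to give the
    biggest decrease in decoding error given those already selected."""
    selected = []
    for _ in range(n_neurons):
        remaining = [c for c in candidates if c not in selected]
        # Pick the candidate that minimizes the error of the population
        # formed by the previously selected encoders plus this one.
        best = min(remaining, key=lambda c: decoding_error(selected + [c]))
        selected.append(best)
    return selected

# Toy usage: "encoders" are numbers, and the "decoding error" of a
# population is the distance of its sum from a target value of 6.
chosen = greedy_select([1, 2, 3, 4], lambda pop: abs(6 - sum(pop)), 2)
```

Note that, as in the paper's procedure, earlier choices are never revisited: each step conditions on the encoders already fixed, which keeps the search tractable at the cost of global optimality.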

**Replacement for Appendix A:**