Research Article  |   February 2010
Corrections to: Optimal stimulus encoders for natural tasks
Author Affiliations
  • Wilson S. Geisler
    Center for Perceptual Systems and Department of Psychology, University of Texas at Austin, Austin, TX, USA
    http://www.cps.utexas.edu, [email protected]
  • Jiri Najemnik
    Center for Perceptual Systems and Department of Psychology, University of Texas at Austin, Austin, TX, USA
    [email protected]
  • Almon D. Ing
    Center for Perceptual Systems and Department of Psychology, University of Texas at Austin, Austin, TX, USA
    [email protected]
Journal of Vision February 2010, Vol.10, 27. doi:https://doi.org/10.1167/10.2.27
CORRECTIONS
CORRECTIONS TO: Geisler, W. S., Najemnik, J., & Ing, A. D. (2009). Optimal stimulus encoders for natural tasks. Journal of Vision, 9(13):17, 1–16, http://journalofvision.org/9/13/17/, doi:10.1167/9.13.17. 
There is a mathematical error in our publication entitled “Optimal stimulus encoders for natural tasks” (Journal of Vision, 9(13):17, 1–16). The error is in text Equation 10 and in Appendix A, where Equation 10 is derived. The error has no effect on the first example application concerning “image patch identification,” and only a minor effect on the second example application concerning “foreground identification.” Nonetheless, under some circumstances, the mathematical error might have more substantial consequences. Here we provide replacements for the text on p. 5 concerning Equation 10 and for Appendix A.
Replacement text on p. 5: 
Under the above assumptions and expanding $p(k \mid \mathbf{r}_q(k,l))$ using Bayes' rule (see Appendix A) we have:
$$
p(k \mid \mathbf{r}_q(k,l)) = \frac{1}{Z} \sum_{j=1}^{n_k} \left( \prod_{t=1}^{q} \sigma_t(k,j) \right)^{-1} \exp\!\left[ -\frac{1}{2} \sum_{t=1}^{q} \frac{\left[ r_t(k,l) - r_t(k,j) \right]^2}{\sigma_t(k,j)^2} \right] \tag{10}
$$
where $n_k$ is the number of training samples from category $k$, and $Z$ is a normalization factor. In keeping with the approximation in Equation 4, the logarithm of this formula gives the average relative entropy when the stimulus is $s(k,l)$. Thus, Equations 5–10 provide a closed-form expression for the average relative entropy of the posterior probability distribution (that the ideal observer computes) for arbitrary samples from the joint probability distribution of environmental categories and associated stimuli, $p_0(k,l)$.
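To make the computation concrete, here is a minimal numerical sketch of Equation 10 (with $Z$ expanded as in Equation A2) and of the resulting average relative entropy. The array conventions are our own, not the paper's: `r_mean[i][j]` and `sigma[i][j]` hold the length-$q$ mean-response and noise-SD vectors for training sample $j$ of category $i$, and the relative entropy of the posterior is taken to be its negative log at the true category, which is our reading of the approximation in Equation 4.

```python
# A minimal sketch of Equation 10, assuming Gaussian neural noise.
# Array conventions are ours, not the paper's: r_mean[i][j] and
# sigma[i][j] are length-q vectors of mean responses and noise SDs
# for training sample j of category i.
import numpy as np

def posterior(resp, r_mean, sigma):
    """p(x | resp) for every category x (Equations 10 and A3)."""
    weights = []
    for i in range(len(r_mean)):
        total = 0.0
        for j in range(len(r_mean[i])):
            mu, sd = np.asarray(r_mean[i][j]), np.asarray(sigma[i][j])
            # (prod_t sigma_t)^(-1) exp[-(1/2) sum_t ((resp_t - mu_t)/sd_t)^2]
            total += np.exp(-0.5 * np.sum(((resp - mu) / sd) ** 2)) / np.prod(sd)
        weights.append(total)
    weights = np.asarray(weights)
    return weights / weights.sum()  # division by Z (the A2 denominator)

def average_relative_entropy(r_mean, sigma):
    """Average of -log p(k | r_q(k, l)) over all training samples (k, l),
    using each sample's mean response vector as the observed response
    (our reading of the approximation in Equation 4)."""
    losses = [
        -np.log(posterior(np.asarray(r_mean[k][l]), r_mean, sigma)[k])
        for k in range(len(r_mean))
        for l in range(len(r_mean[k]))
    ]
    return float(np.mean(losses))
```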
To estimate the optimal linear receptive fields we use a 'greedy' procedure. In other words, neurons are added to the population one at a time, with each neuron's receptive field being selected to produce the biggest decrease in decoding error. Specifically, we proceed sequentially by first finding the encoding function $r_1(k,l)$ that minimizes $D_1$ (see Equation 5); then we substitute the estimated $r_1(k,l)$ into Equation 10 and find the encoding function $r_2(k,l)$ that minimizes $D_2$; then we substitute the estimated $r_1(k,l)$ and $r_2(k,l)$ into Equation 10, and find the encoding function $r_3(k,l)$ that minimizes $D_3$, and so on.
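The sketch below illustrates the greedy loop itself. It is schematic and rests on stated assumptions: stimuli are rows of a matrix, each receptive field is a unit-norm linear weight vector found by naive random search, and `decoding_error` is a hypothetical variance-ratio stand-in for the paper's criterion $D_q$, not the relative-entropy measure of Equations 5–10.

```python
# Schematic sketch of the greedy procedure described above. The search
# strategy (random restarts) and the error measure are hypothetical
# stand-ins for illustration; the paper minimizes D_q (Equations 5-10).
import numpy as np

def decoding_error(responses, labels):
    # Stand-in for D_q: total within-category response variance divided
    # by overall response variance (smaller = categories better separated).
    labels = np.asarray(labels)
    within = sum(responses[labels == k].var(axis=0).sum()
                 for k in np.unique(labels))
    return within / max(responses.var(axis=0).sum(), 1e-12)

def fit_next_receptive_field(stimuli, labels, fixed_responses,
                             trials=200, seed=0):
    # Pick the unit-norm weight vector (receptive field) whose responses,
    # added to the already-fixed ones, give the smallest decoding error.
    rng = np.random.default_rng(seed)
    best_w, best_err = None, np.inf
    for _ in range(trials):
        w = rng.standard_normal(stimuli.shape[1])
        w /= np.linalg.norm(w)
        r_new = stimuli @ w                      # candidate responses r_q(k, l)
        responses = np.column_stack([*fixed_responses, r_new])
        err = decoding_error(responses, labels)
        if err < best_err:
            best_w, best_err = w, err
    return best_w

def greedy_population(stimuli, labels, q):
    # Add neurons one at a time, keeping earlier receptive fields fixed.
    fields, fixed = [], []
    for _ in range(q):
        w = fit_next_receptive_field(stimuli, labels, fixed)
        fields.append(w)
        fixed.append(stimuli @ w)
    return np.array(fields)
```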
Replacement for Appendix A: 
Appendix A
Here we derive formulas for the posterior probability distribution that is computed by the ideal Bayesian observer when receiving a population response $\mathbf{R}_q(k,l)$ to a presentation of stimulus $s(k,l)$. (Keep in mind that the ideal observer does not know that the stimulus is $s(k,l)$, but does know the mean response of each neuron in the population to each stimulus in the training set.) According to Bayes' rule:
$$
p(x \mid \mathbf{R}_q(k,l)) = \frac{p(\mathbf{R}_q(k,l) \mid x)\, p(x)}{\sum_{i=1}^{m} p(\mathbf{R}_q(k,l) \mid i)\, p(i)}
$$
 
To derive text Equation 10 we expand the above equation using the definition of conditional probability:  
$$
p(x \mid \mathbf{R}_q(k,l)) = \frac{p(x) \sum_{j=1}^{n_x} p(\mathbf{R}_q(k,l) \mid x, j)\, p(j \mid x)}{\sum_{i=1}^{m} p(i) \sum_{j=1}^{n_i} p(\mathbf{R}_q(k,l) \mid i, j)\, p(j \mid i)}
$$
 
Given the assumed statistical independence of the neural noise we have,  
$$
p(x \mid \mathbf{R}_q(k,l)) = \frac{p(x) \sum_{j=1}^{n_x} \left[ \prod_{t=1}^{q} p(R_t(k,l) \mid x, j) \right] p(j \mid x)}{\sum_{i=1}^{m} p(i) \sum_{j=1}^{n_i} \left[ \prod_{t=1}^{q} p(R_t(k,l) \mid i, j) \right] p(j \mid i)}
$$
 
Assuming that the samples are representative of the natural world, the prior probability of a category is the fraction of training samples from the category, and the prior probability of a particular sample from a category is the inverse of the number of training samples within the category: $p(i) = n_i / n$, $p(j \mid i) = 1 / n_i$. Thus,
$$
p(x \mid \mathbf{R}_q(k,l)) = \frac{\sum_{j=1}^{n_x} \prod_{t=1}^{q} p(R_t(k,l) \mid x, j)}{\sum_{i=1}^{m} \sum_{j=1}^{n_i} \prod_{t=1}^{q} p(R_t(k,l) \mid i, j)} \tag{A1}
$$
 
By substitution of Equation 8 we have:  
$$
p(x \mid \mathbf{R}_q(k,l)) = \frac{\sum_{j=1}^{n_x} \left( \prod_{t=1}^{q} \sigma_t(x,j) \right)^{-1} \exp\!\left[ -\frac{1}{2} \sum_{t=1}^{q} \frac{\left[ R_t(k,l) - r_t(x,j) \right]^2}{\sigma_t(x,j)^2} \right]}{\sum_{i=1}^{m} \sum_{j=1}^{n_i} \left( \prod_{t=1}^{q} \sigma_t(i,j) \right)^{-1} \exp\!\left[ -\frac{1}{2} \sum_{t=1}^{q} \frac{\left[ R_t(k,l) - r_t(i,j) \right]^2}{\sigma_t(i,j)^2} \right]} \tag{A2}
$$
 
Equation 10 follows by substitution from Equation 4. Note that $Z$ in Equation 10 corresponds to the denominator of Equation A2. In other words, the complete equation is
$$
p(k \mid \mathbf{r}_q(k,l)) = \frac{\sum_{j=1}^{n_k} \left( \prod_{t=1}^{q} \sigma_t(k,j) \right)^{-1} \exp\!\left[ -\frac{1}{2} \sum_{t=1}^{q} \frac{\left[ r_t(k,l) - r_t(k,j) \right]^2}{\sigma_t(k,j)^2} \right]}{\sum_{i=1}^{m} \sum_{j=1}^{n_i} \left( \prod_{t=1}^{q} \sigma_t(i,j) \right)^{-1} \exp\!\left[ -\frac{1}{2} \sum_{t=1}^{q} \frac{\left[ r_t(k,l) - r_t(i,j) \right]^2}{\sigma_t(i,j)^2} \right]} \tag{A3}
$$
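As a quick sanity check on the normalization, the snippet below (reusing the `posterior` function from the sketch following Equation 10, with arbitrary toy dimensions and random mean responses) verifies that with $Z$ equal to the Equation A2 denominator, the posterior sums to one across categories:

```python
# Check that, with Z equal to the denominator of Equation A2, the
# Equation A3 posterior is a proper distribution over categories.
# Requires the `posterior` function defined in the earlier sketch.
import numpy as np

q, m, n = 3, 4, 5    # toy sizes: neurons, categories, samples per category
rng = np.random.default_rng(1)
r_mean = [[rng.standard_normal(q) for _ in range(n)] for _ in range(m)]
sigma = [[np.full(q, 0.5) for _ in range(n)] for _ in range(m)]

p = posterior(rng.standard_normal(q), r_mean, sigma)
assert np.isclose(p.sum(), 1.0)   # posterior sums to 1 across categories
```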
 