On the first trial (t = 1) of a learning block, the optimal Bayesian observer (see Figure 2) calculates the posterior probability. However, because there is uncertainty about which of the signals is present for that block of learning trials, it computes the posterior probability for each of the J possible signals (J = 4 for the task in the present work). This is equivalent to computing a ratio of the likelihood of the data at the ith location given signal presence, P(g_i | s_j), and the likelihood of the data at the ith location given signal absence, P(g_i | n) (Green & Swets, 1966). The optimal observer then sums the likelihood ratios across signal types to compute a sum of weighted likelihoods for each location, with the individual likelihood ratios weighted by the prior expectation of each of the possible signals. On the first learning trial the prior is 1/J, given that each signal has equal probability of being sampled. On trial t, the location with the highest weighted sum of likelihood ratios (SLR_{i,t}) is chosen as containing the target:

SLR_{i,t} = Σ_{j=1}^{J} π_{j,t} ℓ_{i,t,j}
where ℓ_{i,t,j} is the likelihood ratio of the data at location i for the tth learning trial and the jth signal, and π_{j,t} is the weight (known as the prior) given to the likelihood of the jth signal on the tth trial. For white Gaussian noise, the likelihood ratio for each location and signal is given by (Peterson et al., 1954):

ℓ_{i,t,j} = exp[(s_j^T g_{i,t} − E_j/2) / σ²]
where s_j is a column vector containing the jth signal, g_{i,t} is a column vector containing the data at the ith location for the tth trial, and E_j is the energy of the jth signal (E_j = s_j^T s_j, where the superscript T stands for transpose). Note that s_j^T g_{i,t} can be thought of as the response of a linear sensor (matched to the jth signal) to the data (g_{i,t}) at the ith location. Also, σ² is the variance of the noise at each pixel.
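The decision rule above can be sketched in NumPy. This is an illustrative implementation under assumed conventions, not code from the present work: the function name, array shapes (locations and signals stored as rows of pixel vectors), and parameter names are all hypothetical.

```python
import numpy as np

def choose_location(g, signals, priors, sigma):
    """Pick the location most likely to contain the target on one trial.

    g       : (L, P) array, noisy pixel data g_i at each of L locations
    signals : (J, P) array, the J candidate signal templates s_j
    priors  : (J,)   array, prior weight pi_j on each candidate signal
    sigma   : float, standard deviation of the white Gaussian pixel noise
    """
    # Matched-filter responses s_j^T g_i for every location/signal pair: (L, J)
    responses = g @ signals.T
    # Signal energies E_j = s_j^T s_j: (J,)
    energies = np.sum(signals ** 2, axis=1)
    # Likelihood ratios l_{i,j} = exp((s_j^T g_i - E_j / 2) / sigma^2)
    lr = np.exp((responses - energies / 2) / sigma ** 2)
    # Prior-weighted sum of likelihood ratios SLR_i = sum_j pi_j * l_{i,j}
    slr = lr @ priors
    # Decision: the location with the highest weighted sum contains the target
    return int(np.argmax(slr)), slr
```

On the first learning trial the observer would be called with `priors = np.full(J, 1 / J)`, since each of the J signals is equally likely to have been sampled for the block.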