We will not assume that the observer equally values hits and avoiding false alarms. Instead, much as in SDT, we allow for the possibility that the observer values hits more than avoiding false alarms, or vice versa. Hence, in our models, we use the following decision rule,
\begin{eqnarray}
d = \log {\frac{p(C=1 | {\mathbf{x}})}{p(C=0 | {\mathbf{x}})}} + \log {\frac{p_{{\rm present}}}{1 - p_{{\rm present}}}} > 0 ,
\end{eqnarray}
where \(p_{{\rm present}}\) is the parameter that captures any bias towards reporting “target present,” and \(d\) denotes the sum of the log posterior ratio and the log-odds bias term. Using Bayes’s rule and taking logarithms, we have
\begin{eqnarray*}
\log {\frac{p(C=1 | {\mathbf{x}})}{p(C=0 | {\mathbf{x}})}} & = & \log {\frac{p({\mathbf{x}} | C=1)}{p({\mathbf{x}} | C=0)}} + \log {\frac{p(C=1)}{p(C=0)}} \\
& = & \log {\frac{p({\mathbf{x}} | C=1)}{p({\mathbf{x}} | C=0)}} ,
\end{eqnarray*}
where the second line follows from
Equation 1. Hence, the optimal observer will report “target present” when
\begin{eqnarray}
d = \log {\frac{p({\mathbf{x}} | C=1)}{p({\mathbf{x}} | C=0)}} + \log {\frac{p_{{\rm present}}}{1 - p_{{\rm present}}}} > 0 .
\end{eqnarray}
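In code, this rule is straightforward once the log-likelihood ratio is available; the following is a minimal Python sketch (the function and parameter names are ours, and the log-likelihood ratio itself is derived below):

```python
# Sketch of the biased decision rule (Equation 12): report "target
# present" when the log-likelihood ratio plus the log-odds bias term
# exceeds zero. `log_lik_ratio` is assumed computed as derived below.
import numpy as np

def report_present(log_lik_ratio, p_present=0.5):
    """p_present in (0, 1); 0.5 corresponds to an unbiased observer."""
    d = log_lik_ratio + np.log(p_present / (1.0 - p_present))
    return d > 0  # True -> report "target present"
```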
Assuming that there is at most one target and that measurement noise at different locations is independent, it has been shown (Ma et al., 2011; Palmer et al., 2000) that the log-likelihood ratio is given by
\begin{eqnarray}
\log {\frac{p({\mathbf{x}} | C=1)}{p({\mathbf{x}} | C=0)}} = \log {\left( \frac{1}{N} \sum _{i=1}^N e^{d_i} \right)} ,
\end{eqnarray}
where
\(N\) indicates the total number of Gabor patches in the display, and
\(d_i\) indicates the local log-likelihood ratio for location
\(i\). The local log-likelihood ratio is defined as
\begin{eqnarray}
d_i = \log {\frac{p(x_i | T_i = 1)}{p(x_i | T_i = 0)}} .
\end{eqnarray}
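Given the local ratios \(d_i\), the sum in Equation 13 is best evaluated in log space to avoid overflow when the local ratios are large; a minimal Python sketch:

```python
# Sketch of Equation 13: log((1/N) * sum_i exp(d_i)), evaluated stably
# with a log-sum-exp.
import numpy as np
from scipy.special import logsumexp

def global_llr(d_local):
    """d_local: array of the N local log-likelihood ratios d_i."""
    return logsumexp(d_local) - np.log(len(d_local))
```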
Marginalizing over
\(s_i\) and substituting in expressions from the generative model, we find
\begin{eqnarray*}
d_i & = & \log \frac{\int {p(x_i | s_i)p(s_i | T_i = 1)ds_i}}{\int {p(x_i | s_i)p(s_i | T_i = 0)ds_i}}\\
& = & \log \frac{\mathrm{VM}(x_i ; 0, \kappa )}{\int {\mathrm{VM}(x_i ; s_i, \kappa )\mathrm{VM}(s_i ; \mu , \kappa _s)ds_i}} .
\end{eqnarray*}
The integrand in the denominator of this expression is the product of two von Mises distributions.
Murray and Morgenstern (2010) state that the product of two von Mises densities is itself a new, scaled von Mises density. Because a von Mises distribution is a probability distribution, it integrates to 1 over all angles; hence, when we integrate the product over all
\(s_i\), only the scaling factor remains. Using the formula from
Murray and Morgenstern (2010), we have
\begin{eqnarray*}
d_i & = & \log \frac{\mathrm{VM}(x_i ; 0, \kappa )}{\frac{I_0\big (\sqrt{{\kappa }^2 + {\kappa _s}^2 + 2\kappa \kappa _s\cos {(x_i - \mu )}}\big )}{2\pi I_0(\kappa )I_0(\kappa _s)}} .
\end{eqnarray*}
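This scaling constant can be verified numerically; the following brief sketch (with arbitrary example parameter values) compares direct integration over \(s_i\) against the closed form:

```python
# Check that integrating the product of the two von Mises densities over
# s_i matches I0(sqrt(k^2 + ks^2 + 2 k ks cos(x - mu))) / (2 pi I0(k) I0(ks)).
import numpy as np
from scipy.special import i0
from scipy.integrate import quad

def vm_pdf(x, mu, kappa):
    """Von Mises density on the circle."""
    return np.exp(kappa * np.cos(x - mu)) / (2 * np.pi * i0(kappa))

kappa, kappa_s, mu, x_i = 4.0, 2.0, 0.0, 0.3  # arbitrary example values
integral, _ = quad(lambda s: vm_pdf(x_i, s, kappa) * vm_pdf(s, mu, kappa_s),
                   -np.pi, np.pi)
a = np.sqrt(kappa**2 + kappa_s**2 + 2 * kappa * kappa_s * np.cos(x_i - mu))
closed_form = i0(a) / (2 * np.pi * i0(kappa) * i0(kappa_s))
assert np.isclose(integral, closed_form)
```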
Substituting in the definition of a von Mises distribution, rearranging, and using the fact that for both distractor distributions in our experiment,
\(\mu =0\), we find
\begin{eqnarray}
d_i & = & \kappa \cos (x_i) + \log \frac{I_0(\kappa _s)}{I_0\Big (\sqrt{{\kappa }^2 + {\kappa _s}^2 + 2\kappa \kappa _s\cos {(x_i)}}\Big )}.
\end{eqnarray}
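For simulations, this closed form can be implemented directly; a minimal sketch (the names are ours), using the exponentially scaled Bessel function from SciPy so that \(\log I_0\) remains finite at large concentrations:

```python
# Sketch of the local log-likelihood ratio d_i in the equation above.
import numpy as np
from scipy.special import ive

def log_i0(z):
    """log(I0(z)), computed stably via I0(z) = ive(0, z) * exp(|z|)."""
    return np.abs(z) + np.log(ive(0, z))

def d_local(x, kappa, kappa_s):
    """x: measurement(s) in radians, relative to the target orientation."""
    a = np.sqrt(kappa**2 + kappa_s**2 + 2 * kappa * kappa_s * np.cos(x))
    return kappa * np.cos(x) + log_i0(kappa_s) - log_i0(a)
```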
For the case of uniform distractors,
\(\kappa _s = 0\); since \(I_0(0) = 1\) and the square root reduces to \(\kappa\), we have
\begin{equation*}
d_i = \kappa \cos (x_i) - \log (I_0(\kappa )) .
\end{equation*}
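A quick numerical check (with arbitrary example values) that the general expression reduces to this special case when \(\kappa_s = 0\):

```python
# With kappa_s = 0, I0(0) = 1 removes the numerator term and the square
# root collapses to kappa, leaving kappa*cos(x) - log(I0(kappa)).
import numpy as np
from scipy.special import i0

x, kappa, kappa_s = 0.7, 5.0, 0.0  # arbitrary example values
a = np.sqrt(kappa**2 + kappa_s**2 + 2 * kappa * kappa_s * np.cos(x))
d_general = kappa * np.cos(x) + np.log(i0(kappa_s)) - np.log(i0(a))
d_uniform = kappa * np.cos(x) - np.log(i0(kappa))
assert np.isclose(d_general, d_uniform)
```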
Substituting these expressions into Equation 13 gives the log-likelihood ratio. In turn, using the log-likelihood ratio in Equation 12 gives the optimal observer’s decision rule; that is, it tells us, for any combination of measurements
\({\mathbf{x}}\), what the optimal observer would do.
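Putting the steps together, here is a minimal end-to-end sketch of the optimal observer’s response to a single display (all parameter values are arbitrary illustrations, not fitted estimates):

```python
# End-to-end sketch: closed-form local ratios (derived above), global
# log-likelihood ratio (Equation 13), and the biased decision (Equation 12).
import numpy as np
from scipy.special import ive, logsumexp

def log_i0(z):
    return np.abs(z) + np.log(ive(0, z))

def optimal_decision(x, kappa, kappa_s, p_present=0.5):
    """x: array of N measured orientations (radians, target at 0)."""
    a = np.sqrt(kappa**2 + kappa_s**2 + 2 * kappa * kappa_s * np.cos(x))
    d_i = kappa * np.cos(x) + log_i0(kappa_s) - log_i0(a)
    llr = logsumexp(d_i) - np.log(len(x))
    d = llr + np.log(p_present / (1.0 - p_present))
    return d > 0  # True -> report "target present"

rng = np.random.default_rng(0)
x = rng.vonmises(mu=0.0, kappa=2.0, size=4)  # one example display, N = 4
print(optimal_decision(x, kappa=4.0, kappa_s=2.0))
```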