The ideal observer's performance was measured by computer simulation in each of the stimulus conditions in which the human observers were tested (Dynamic, Static, and Shuffled). The ideal decision rule for our task and stimuli was derived using Bayes' rule and is similar in principle to that used in other tasks involving 1-of-N recognition (e.g., Gold, Bennett, & Sekuler, 1999a; Tjan et al., 1995). In our experiment, observers were asked to determine the expression E_i (where i refers to the ith of r possible expressions) that was most likely to have appeared within the noisy stimulus data D. According to Bayes' rule, the a posteriori probability of E_i having been presented given D can be expressed as

$$P(E_i \mid D) = \frac{P(D \mid E_i)\,P(E_i)}{P(D)}.$$
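To make this computation concrete, a minimal sketch of the posterior calculation is given below. The likelihood values are purely illustrative placeholders (they are not taken from the experiment), and the uniform prior reflects the fact that all expressions were equally probable in our task.

```python
import numpy as np

# Hypothetical likelihoods P(D | E_i) for r = 3 candidate expressions
# (illustrative values only), and a uniform prior P(E_i) = 1/r.
likelihoods = np.array([1e-12, 4e-11, 7e-13])
priors = np.full(likelihoods.shape, 1.0 / likelihoods.size)

# Bayes' rule: P(E_i | D) = P(D | E_i) P(E_i) / P(D), where P(D) is the
# sum of the numerator over all candidate expressions.
posteriors = likelihoods * priors / np.sum(likelihoods * priors)

# The maximum a posteriori choice; with a uniform prior this is the same
# expression that maximizes the likelihood P(D | E_i) alone.
best_expression = int(np.argmax(posteriors))
```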
For our task and stimuli, the prior probability of seeing any given expression, P(E_i), and the normalizing factor P(D) are both constant across all E_i, and thus can be removed without changing the relative ordering of P(E_i | D). Therefore, the ideal observer chooses the expression that maximizes the likelihood P(D | E_i). For the case where there are m possible faces for each expression, shown in additive Gaussian white noise, the ideal observer must compute this probability for each of the m possible faces within an expression category (all of which are equally probable) and sum these probabilities across faces, resulting in the following probability function:

$$P(D \mid E_i) = \frac{1}{m}\sum_{j=1}^{m}\prod_{k=1}^{n}\frac{1}{\sigma\sqrt{2\pi}}\exp\!\left[-\frac{(D_k - T_{ij,k})^2}{2\sigma^2}\right],$$
where D_k is the kth pixel of the noisy stimulus, T_{ij,k} is the corresponding pixel of the noise-free stimulus showing the jth face with expression E_i, n is the total number of pixels in the entire stimulus (i.e., all pixels of all 30 frames), and σ is the standard deviation of the Gaussian distribution from which the external noise was generated. The ideal decision rule is to choose the expression E_i that maximizes this function.
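A minimal sketch of how this decision rule might be simulated is given below. It assumes each stimulus is represented as a single flattened vector of n pixel values (all 30 frames concatenated) and that the noise-free stimuli are stored in a hypothetical `templates` array of shape (r, m, n); these names and the overall structure are illustrative, not the authors' code. The computation is carried out in log space, with a log-sum-exp over faces, to avoid numerical underflow, and constant factors that cannot affect the argmax are dropped.

```python
import numpy as np
from scipy.special import logsumexp

def ideal_observer_response(stimulus, templates, sigma):
    """Return the index of the expression E_i that maximizes P(D | E_i).

    stimulus  -- 1-D array of n pixel values (all 30 frames flattened)
    templates -- array of shape (r, m, n): noise-free stimuli for the
                 r expressions x m faces, flattened the same way
    sigma     -- standard deviation of the external Gaussian white noise
    """
    r, m, n = templates.shape
    # Log-likelihood of the data for face j within expression i:
    # log prod_k N(D_k; T_ijk, sigma^2), omitting the normalization term,
    # which is identical for every hypothesis and so does not affect the argmax.
    sq_err = np.sum((stimulus[None, None, :] - templates) ** 2, axis=-1)
    log_like_face = -sq_err / (2.0 * sigma ** 2)
    # Sum the equally probable face likelihoods within each expression:
    # log P(D | E_i) = log( (1/m) * sum_j exp(log_like_face[i, j]) ).
    log_like_expr = logsumexp(log_like_face, axis=1) - np.log(m)
    return int(np.argmax(log_like_expr))
```

In a simulation of the task, this function would be called on each trial with a freshly noise-corrupted stimulus, and the proportion of correct responses tallied across trials to estimate ideal performance in each condition.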