A common finding in oddity search, a search in which the target is unknown but defined to be different from the distractors, is that human performance remains insensitive to, or even improves with, the number of distractors (set size). A number of explanations based on perceptual and attentional mechanisms have been proposed to account for this anomalous set-size effect. Here, we consider whether the shallower set-size function for oddity search could instead be explained by stimulus information and task demands. We developed an ideal-observer model and a difference-coding (standard-deviation) model for single-fixation oddity search and compared them to the ideal observer in the standard target-known search as well as to human performance in both search tasks. Both the ideal-observer and difference-coding models for the oddity search produced a shallower set-size function than the target-known ideal observer and were good predictors of human search accuracy. However, the ideal-observer model was a better predictor than the standard-deviation model for 10 of the 12 data sets. The results highlight the importance of using ideal-observer analysis to separate contributions to human performance arising from perceptual/attentional mechanisms inherent to the human brain from those arising from differences in the stimulus information associated with the tasks.

*N* elements in the alternative will be the target, the ideal observer has to consider all possible scenarios and, thus, sum the likelihoods across all the mutually exclusive events of each element within an alternative being the target. To illustrate the strategy for the task considered, we outline the ideal-observer strategy for a target-known-exactly search for an example with only two possible distractor distributions.

*v*_{1}–*v*_{4}, Figure 3). It then creates a vector of those values from each of the alternatives (**x**_{1} and **x**_{2}, Figure 3). It then uses Equation 1 (which is a more general equation than is necessary for this example, as it accommodates any number of *M* intervals and *D* distractor types; see the Appendix for the derivation) to compute the likelihood of observing those values given that a particular interval (*l*) contains the target. It compares the hypothesized distractor intervals to vectors of mean distractor values (**d**_{j}) and the hypothesized target interval to vectors of mean target values (**t**_{ij}). (T refers to the transpose matrix operation.) Assuming that the probability density functions are Gaussian, we obtain
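Although Equation 1 itself is not reproduced in this excerpt, the computation it describes — summing Gaussian likelihoods over every possible target location and distractor type within an alternative — can be sketched in a few lines. This is an illustrative sketch under the stated assumptions of independent, equal-variance Gaussian noise; the function names are ours, not the paper's:

```python
import math

def gauss(x, mu, sigma=1.0):
    """Gaussian density of one observed feature value x about mean mu."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def likelihood_target_in(x, f_target, distractor_means, sigma=1.0):
    """Likelihood that this alternative contains the (known) target: sum the
    product of item likelihoods over the N possible target locations (i) and
    the D possible distractor mean values (j)."""
    total = 0.0
    for f_d in distractor_means:          # distractor-type uncertainty (j)
        for i in range(len(x)):           # target-location uncertainty (i)
            like = 1.0
            for n, xn in enumerate(x):
                mu = f_target if n == i else f_d
                like *= gauss(xn, mu, sigma)
            total += like
    return total
```

For a two-alternative trial, the model would evaluate this sum for each alternative and choose the alternative with the larger value.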

*D* distractor distributions (*j*) as well as all *N* possible target locations (*i*). The sums of these likelihoods (*v*_{1}–*v*_{4}, Figure 5), computed in this case using Equation 2, account for all possible target types (*V* types), as well as all distractor types (*V* − 1 types) and locations (note the extra rows for *T*_{1} and *T*_{2} in Figure 5). After this step, the model is identical to a target-known search; it picks the alternative with the greater summed likelihood as the one that contains the target.

*v*_{1}–*v*_{4}, Figure 6). It next computes the standard deviation of these values from each of the alternatives (

*d*′) increases. At high values of *d*′, the standard-deviation model approximates the ideal observer for the oddity-search task.
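As a concrete illustration, a minimal difference-coding decision rule can be written as follows; it simply treats within-alternative spread as the evidence for oddness (the function name is ours, not the paper's):

```python
import statistics

def sd_model_choice(alternatives):
    """Difference-coding (standard-deviation) rule: compute the standard
    deviation of the observed feature values within each alternative and
    choose the alternative with the largest spread, since an odd target
    inflates the variability of the alternative that contains it."""
    sds = [statistics.pstdev(x) for x in alternatives]
    return max(range(len(sds)), key=lambda i: sds[i])
```

Unlike the ideal observer, this rule needs no knowledge of the possible target or distractor distributions, which is why it only approximates the ideal observer when the odd item stands out strongly (high *d*′).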

*N* distributions without replacement. In a typical oddity search, the number of distributions could potentially be infinite; in our task, *N* = 6. It is important to understand how the number of possible target distributions affects the predictions made by these models. The effect of adding more distributions is illustrated in Figure 7.

*d*′, to refer to the distance from one potential target distribution to the adjacent distribution (Figure 8). To investigate the effect of the number of possible target/distractor distributions while controlling for the average pairwise index of detectability, we equated the average *d*′ (i.e., the expected distance between a target and a distractor across trials). If we had simply equated *d*′ across adjacent distributions, performance would have increased due to an increasing average *d*′.^{1} Thus, we generated model simulations for an increasing number of distributions but equated the average *d*′.

*N*) increases, the performance of the ideal-observer oddity model is increasingly well approximated by the standard-deviation model. It is important to note that the shallowness of the set-size function for the oddity task (for both the ideal-observer and standard-deviation models) and the steepness of the function for the target-known ideal observer remain relatively invariant to the number of distributions.

^{2} and the “white” luminance set to 50.00 cd/m^{2}. The study was conducted in a dark room. Observers were at an approximate distance of 55 cm from the display, resulting in a subtended angle of 0.034° per pixel.

^{2}. To reduce location uncertainty, we placed four small squares adjacent to each dot; these squares subtended 0.1° × 0.1° of visual angle, had a luminance of 7.8 cd/m^{2}, and were located 0.79° up, down, left, and right from the center of the dot (see Figures 1A and 1B). There were 4, 8, 12, or 16 dots in each display. They were placed at locations chosen from 20 possible equidistant locations along the circumference of an imaginary circle at an eccentricity of 9.08° from a central fixation point. Each dot was placed adjacent to another dot, except that half the dots were displaced from the other half by skipping one of the possible locations (see Figure 1). The skipped location created a gap that was used to distinguish between the two alternatives. No observers reported difficulty in perceptually segregating the two groups of dots forming the two alternatives. The dots were placed next to each other to ensure that each of the alternatives had equal item density regardless of set size. On each trial, the 20 possible locations were rotated along the imaginary circle by a random overall rotation (the rotation amount was chosen from a uniform distribution of 1–360°). Each Gaussian's contrast was one of six possible contrasts (−0.548, −0.352, −0.116, 0.116, 0.352, 0.548) perturbed by independent contrast noise with a standard deviation of 0.18. The noise standard deviation was chosen so that the signal-to-noise ratio for the stimuli resulted in behavioral performance in the range of 70–90% correct. Contrast values both above and below the background luminance were chosen to allow observers to perceptually discriminate the six different contrast values. For each trial, two contrasts were chosen: one was the target contrast and the other was the distractor contrast. Only one of the Gaussians displayed had the target contrast; the rest had the distractor contrast. Each set size, target contrast, distractor contrast, and target location was equally likely to be used on every trial. It is important to note that the means of the contrast distributions were not equally spaced (i.e., not −0.3, −0.2, −0.1, 0.1, 0.2, 0.3). The ideal-observer model calculations for this search task assumed that these distributions are equally spaced; however, we spaced the distributions unevenly for the human observers to control for the effect described by Weber's law, which posits that the just-noticeable difference between two stimuli increases with the intensity of the stimulus. Thus, to achieve equivalent perceptual discrimination across neighboring contrasts, we increased the contrast differences as the element contrasts increased. If we had used equally spaced contrasts, the perceptual discriminability for human observers would have been unequal.
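The trial-generation procedure just described can be sketched as follows. This is an illustrative sketch only: the constants are taken from the Methods above, while the function and variable names are ours:

```python
import random

# Six possible mean contrasts and the contrast-noise SD from the Methods.
CONTRASTS = [-0.548, -0.352, -0.116, 0.116, 0.352, 0.548]
NOISE_SD = 0.18

def make_trial(set_size, rng=random):
    """Sample one display: draw distinct target and distractor contrasts,
    assign the target contrast to one randomly chosen element, and perturb
    every element with independent Gaussian contrast noise."""
    target_c, distractor_c = rng.sample(CONTRASTS, 2)
    target_loc = rng.randrange(set_size)
    return [(target_c if i == target_loc else distractor_c) + rng.gauss(0.0, NOISE_SD)
            for i in range(set_size)]
```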

*p* < .05). Observer performance for the target-known condition followed the typical set-size effect (Baldassi & Verghese, 2002; Cave & Wolfe, 1990; Eckstein et al., 2000; Monnier & Nagy, 2001; Palmer, 1995; Treisman & Gelade, 1980), with percentage correct decreasing by about 5% on average as the set size increased from 2 to 8. By comparison, the set-size effect for the oddity-search condition was much shallower, with percentage correct decreasing on average by 2% from Set Size 2 to Set Size 8.

*d*′ (illustrated in Figure 10) was used to fit all models to the human data.

| Condition | Observer | Contrast: ideal target known/ideal oddity | Observer | Orientation: ideal target known/ideal oddity |
|---|---|---|---|---|
| AFC | W.S. | 0.3175 | W.S. | 0.0412 |
| | D.K. | 1.8203 | D.K. | 0.3959 |
| | C.L. | 1.1074 | B.P. | 9.5204* |
| IFC | W.S. | 0.5413 | W.S. | 3.8059* |
| | J.C. | 0.8438 | J.C. | 1.2454 |
| | T.S. | 1.2399 | T.S. | 3.1382* |

| Condition | Observer | Contrast: ideal target known/standard deviation oddity | Observer | Orientation: ideal target known/standard deviation oddity |
|---|---|---|---|---|
| AFC | W.S. | 1.2624 | W.S. | 0.5228 |
| | D.K. | 1.3513 | D.K. | 1.4618 |
| | C.L. | 1.6676 | B.P. | 13.0028* |
| IFC | W.S. | 0.7389 | W.S. | 2.9424* |
| | J.C. | 2.0394 | J.C. | 1.6453 |
| | T.S. | 2.5502 | T.S. | 5.6136* |

*p* < .01).

*T* targets and the distractors could be one of *D* distractors (with the added constraint that the target will never be the same as the distractor). Previous ideal-observer models for single-fixation search have been proposed to predict ideal performance in three of the four cells in Table 3. Here, we extended the modeling efforts to an ideal-observer model for the target-known-statistically/distractor-known-statistically case.

| Distractor knowledge | Target knowledge: exact | Target knowledge: statistical |
|---|---|---|
| Exact | Bochud et al., 2004; Burgess & Ghandeharian, 1984; Palmer, 1995; Swenson & Judy, 1981 | Baldassi & Burr, 2004; Cameron, Tai, Eckstein, & Carrasco, 2004; Judy et al., 1997; Solomon & Morgan, 2001; Zhang et al., 2004 |
| Statistical | Rosenholtz, 2001 | Bacon & Egeth, 1991; Blough, 1989; Monnier & Nagy, 2001; Santhi & Reeves, 2004 |

**x**_{1}…**x**_{M}) and chooses the alternative with the highest posterior probability. For this experiment, the observed data are the contrast or orientation values of each of the items in the display. The posterior probability at the *l*th alternative can be related to the likelihood of the data in all alternatives given target presence in the *l*th alternative through Bayes' rule (Green & Swets, 1966; Peterson, Birdsall, & Fox, 1954). The posterior probability of a given alternative, *l,* containing the target is given as a normalized product of the likelihood of the observed data, *P*(**x**_{1}…**x**_{M}∣*l*), and the prior probability of that alternative, *P*(*l*):

*P*(*l*∣**x**_{1}…**x**_{M}) = *P*(**x**_{1}…**x**_{M}∣*l*) *P*(*l*) / *P*(**x**_{1}…**x**_{M}).

*P*(**x**_{1}…**x**_{M}) is the probability of the data, which acts as a normalization factor for the posterior probability. Because it is independent of *l,* it can be neglected without affecting the outcome of the decisions. In this work, each alternative is equally likely to contain the target; hence, *P*(*l*) is the same for all *l* and can also be neglected, leaving the decision completely determined by the likelihood term. Thus, the alternative chosen by the ideal observer,
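In code, this decision rule is just a maximum over (unnormalized) posteriors; with equal priors it reduces to picking the maximum-likelihood alternative. A minimal sketch (the function name is ours):

```python
def ideal_choice(likelihoods, priors=None):
    """Bayes'-rule decision across M alternatives: the posterior is
    proportional to likelihood times prior; the normalizing term
    P(x_1...x_M) is common to all alternatives and can be dropped. With
    equal priors, the decision is determined by the likelihood term alone."""
    m = len(likelihoods)
    if priors is None:
        priors = [1.0 / m] * m
    posteriors = [lk * p for lk, p in zip(likelihoods, priors)]
    return max(range(m), key=lambda l: posteriors[l])
```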

*F*_{1}…*F*_{V} as the set of mean feature values of the possible target and distractor distributions. The mean feature values were uniformly spaced over a range of contrast or orientation, with spacing Δ*f* (therefore, *F*_{i} = *F*_{1} + (*i* − 1)Δ*f*). Each observed feature value is normally distributed about a mean feature value (the particular mean feature value is described below) with a common variance, *σ*^{2}. Observed feature values were thus controlled by the spacing of the mean feature values and the standard deviation of the noise. For the human observer experiments, the spacing and standard deviation were set to give a reasonable level of task difficulty and perceived variability in the stimuli (values are described in the Methods section), while avoiding any issues of saturation (contrast) or wraparound (orientation). For convenience, the ideal observer was computed with the variance fixed at 1, and difficulty was controlled solely through the distance between the feature values. Performance of the ideal observer is equivalent for equal values of the spacing divided by the noise standard deviation. This ratio is the *d*′ in Figure 8.
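These two ingredients, the uniformly spaced mean feature values and the spacing-to-noise ratio that determines ideal performance, can be sketched directly (function names are ours):

```python
def mean_feature_values(f1, delta_f, v):
    """Uniformly spaced means: F_i = F_1 + (i - 1) * delta_f, for i = 1..v."""
    return [f1 + (i - 1) * delta_f for i in range(1, v + 1)]

def d_prime(delta_f, sigma):
    """Index of detectability for adjacent distributions: spacing / noise SD.
    Ideal performance is identical for any (delta_f, sigma) with the same ratio."""
    return delta_f / sigma
```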

*V,* was fixed at six. On each trial, two different mean feature values were selected: one as the mean value of the target and one as the mean value of the distractors. For the alternative containing the target stimuli, **t**, the observed feature value of one item is generated by adding noise to the mean target feature value, and the remaining items are generated using the distractor mean with the addition of noise. Nontarget alternatives, **d**, are generated using the distractor feature value alone with noise added. The following example represents the mean values of the target and distractor alternatives when the target is at the first position, the set size is 4, the target feature is *F*_{6}, and the distractor feature is *F*_{1}. The expected value of the target alternative would be **t** = [*F*_{6}, *F*_{1}, *F*_{1}, *F*_{1}], and the expected values of the distractor alternative(s) would be **d** = [*F*_{1}, *F*_{1}, *F*_{1}, *F*_{1}]. The probability distribution describing the observed values in a particular alternative (**x**) is the multivariate normal density, where **μ** is a vector of expected values (i.e., [*F*_{6}, *F*_{1}, *F*_{1}, *F*_{1}])

**Σ** to model statistical dependencies in the observed values.
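With independent noise (diagonal covariance *σ*^{2}**I**), the multivariate normal density of an alternative's observed values factors into a product of univariate Gaussians. A minimal sketch under that independence assumption (a full covariance **Σ** would be needed to model dependencies; the function name is ours):

```python
import math

def mvn_likelihood(x, mu, sigma=1.0):
    """Multivariate normal density of observed values x about the vector of
    expected values mu, assuming independent noise with common variance
    sigma^2 (i.e., covariance sigma^2 * I)."""
    d = len(x)
    sq = sum((xi - mi) ** 2 for xi, mi in zip(x, mu))
    return (2.0 * math.pi * sigma ** 2) ** (-d / 2.0) * math.exp(-0.5 * sq / sigma ** 2)
```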

**x**_{l}, to a vector that contains expected values for the target alternative (**t**). The remaining alternatives are compared to a vector that contains the expected values for the distractor (**d**). For the moment, let us consider the **t** and **d** vectors to be fixed (this requirement is relaxed below to accommodate uncertainty within an alternative). Because each alternative is independent, we can write the conditional distribution of all alternative data as the product of the likelihood of alternative *l* given the target vector and the likelihood of the remaining alternatives given the distractor vector

*N,* the set size within each alternative). We now address this uncertainty in the target and distractor vectors.

*N* locations within an alternative. This results in *N* target vectors that contain the target at different locations. We therefore add a subscript to the target vector, **t**_{i} (*i* = 1, …, *N*), where *i* indicates the location of the target within the alternative. The likelihood that a particular alternative contains the target is the sum of the likelihoods of the target across all locations in the alternative. This generalizes Equation A4 to

*D*), the model needs to sum across all possible distractor features in addition to the possible target locations. This adds another subscript to both the target vector variable, **t**_{ij}, and to our distractor vector variable, **d**_{j}. For example, for a given target feature value of *F*_{Targ}, the target vector **t**_{12} = [*F*_{Targ}, *F*_{2}, *F*_{2}, *F*_{2}], and distractor vectors are represented by **d**_{2} = [*F*_{2}, *F*_{2}, *F*_{2}, *F*_{2}]. When distractor uncertainty is accounted for, the likelihood in Equation A5 is generalized to

*F*_{Targ} is known to the observer. For an oddity search, there is also uncertainty as to the target feature value. We incorporate this additional uncertainty by adding an additional subscript, *k,* to the target vector, which indicates the feature value of the target. For an alternative with four possible locations, a feature vector **t**_{213} = [*F*_{1}, *F*_{3}, *F*_{1}, *F*_{1}]. Note that the feature value of the target cannot be the same as that of the distractor, and hence, *k* ≠ *j*. The target and distractor feature values are otherwise independent; hence, any pairing of feature values as target/distractor is equally likely, although it is straightforward to generalize this as well. The model must now sum across all target feature values (*V* values) in addition to the other components shown in Equation A6,

*l* (they are essentially constants). Removing these terms from the equation results in

*M* alternatives, and again, the model chooses the alternative with the greatest value.
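Putting the three sources of uncertainty together, the oddity computation sums Gaussian likelihoods over target locations (*i*), distractor features (*j*), and target features (*k* ≠ *j*). The sketch below is a simplified, per-alternative version: it scores each alternative independently rather than computing the joint likelihood across all alternatives, and all names are ours rather than the paper's:

```python
import math

def gauss(x, mu, sigma=1.0):
    """Univariate Gaussian density of one observed feature value about mu."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def oddity_score(x, features, sigma=1.0):
    """Summed likelihood that alternative x contains the odd target, summing
    over distractor features (j), target features (k != j), and locations (i)."""
    total = 0.0
    for j, f_d in enumerate(features):
        for k, f_t in enumerate(features):
            if k == j:          # the target never matches the distractor
                continue
            for i in range(len(x)):
                like = 1.0
                for m, xm in enumerate(x):
                    like *= gauss(xm, f_t if m == i else f_d, sigma)
                total += like
    return total

def oddity_choice(alternatives, features, sigma=1.0):
    """Choose the alternative with the greatest summed likelihood."""
    scores = [oddity_score(x, features, sigma) for x in alternatives]
    return max(range(len(scores)), key=lambda l: scores[l])
```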

^{1}If there are six equally spaced distributions with a distance of 1 *d*′ unit between adjacent distributions, then the target will most often differ from the distractor by 1 *d*′ unit; the maximum difference is 5 *d*′ units, and the average *d*′ will be 2.33 *d*′ units. If there are 32 distributions, each separated by a distance of 1 *d*′ unit, the target will still most often differ from the distractor by 1 *d*′ unit, but the maximum difference is now 31 *d*′ units, and the average *d*′ would be 11 *d*′ units. This would lead to higher ideal-observer performance in the 32-distribution case than in the 6-distribution case because there are more “easy” trials.
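The averages quoted in this footnote can be checked directly: for *V* equally spaced distributions with unit adjacent spacing, the mean absolute separation of a randomly drawn target/distractor pair is (*V* + 1)/3. A quick numerical check (the function name is ours):

```python
def average_pairwise_dprime(v, spacing=1.0):
    """Average |separation| between two distinct distributions drawn at random
    from v equally spaced distributions; analytically, spacing * (v + 1) / 3."""
    pairs = [(i, j) for i in range(v) for j in range(v) if i != j]
    return spacing * sum(abs(i - j) for i, j in pairs) / len(pairs)
```

This reproduces the footnote's values: 2.33 *d*′ units for 6 distributions and 11 for 32.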

*Journal of Experimental Psychology: Human Perception and Performance*, 17, 77–90.

*Vision Research*, 44, 1227–1233.

*Journal of Vision*, 2(8):3, 559–570, http://journalofvision.org/2/8/3/, doi:10.1167/2.8.3.

*Vision Research*, 18, 637–650.

*Journal of Experimental Psychology: Animal Behavior Processes*, 15, 14–22.

*Medical Physics*, 31, 24–36.

*Vision Research*, 35, 2955–2966.

*Perception & Psychophysics*, 51, 465–472.

*Journal of the Optical Society of America A: Optics and Image Science*, 1, 906–910.

*Spatial Vision*, 17, 295–325.

*Cognitive Psychology*, 22, 225–271.

*Psychological Science*, 9, 111–118.

*Journal of Vision*, 4(12):3, 1006–1019, http://journalofvision.org/4/12/3/, doi:10.1167/4.12.3.

*Journal of the Optical Society of America A: Optics, Image Science, and Vision*, 15, 2406–2419.

*Perception & Psychophysics*, 62, 425–451.

*Nature*, 402, 176–178.

*Signal detection theory and psychophysics*. New York: Krieger.

*Proceedings SPIE Image Perception* (Vol. 3036, pp. 39–47). Bellingham, WA: SPIE Press.

*Vision Research*, 24, 1977–1990.

*Vision Research*, 35, 549–568.

*Vision Research*, 41, 313–328.

*Nature*, 434, 387–391.

*Journal of the Acoustical Society of America*, 41, 497–505.

*Vision Research*, 34, 1703–1721.

*Current Directions in Psychological Science*, 4, 118–123.

*Vision Research*, 40, 1227–1268.

*Attention*. Hove, East Sussex, UK: Psychology Press.

*Transactions of the IRE Professional Group on Information Theory*, 4, 171–212.

*Journal of Experimental Psychology: Human Perception and Performance*, 27, 985–999.

*Vision Research*, 44, 1235–1256.

*Attention and Performance VIII* (pp. 277–269). Hillsdale, NJ: Erlbaum.

*Attention and Performance X* (pp. 106–121). Hillsdale, NJ: Erlbaum.

*Journal of Vision*, 1(1):2, 9–17, http://journalofvision.org/1/1/2/, doi:10.1167/1.1.2.

*Perception & Psychophysics*, 29, 521–534.

*Vision Research*, 35, 3053–3069.

*Cognitive Psychology*, 12, 97–136.

*Neuron*, 31, 523–535.

*Vision Research*, 13, 1739–1753.

*IEEE Transactions on Medical Imaging*, 23, 459–474.