Open Access
Article  |   July 2019
What limits search for conjunctions of simple visual features?
Author Affiliations
  • Endel Põder
    Institute of Psychology, University of Tartu, Tartu, Estonia
    [email protected]
  • Maciej Kosiło
    Department of Psychology, City University London, London, UK
Journal of Vision July 2019, Vol.19, 4. doi:https://doi.org/10.1167/19.7.4
Abstract

Despite decades of research, we still do not know for certain the roles of internal noise, attention, and crowding in search for conjunctions of simple visual features. In this study, we tried several modifications to the classic design of conjunction-search experiments. In order to match exactly the proportions of simple features, two different targets were presented in target-present trials: vertical red and horizontal blue bars among vertical blue and horizontal red distractors. Both the length of the bars and the number of objects in a display were varied. Positions of objects were selected to minimize crowding effects. Exposure duration was 60 ms, and proportion correct was used as the measure of performance. For conjunction search, the results rejected the unlimited-capacity model and were consistent with limited-capacity attentional processing and with a Naka–Rushton transform of the target–distractor difference. Qualitatively the same results were obtained when bar length was fixed and a fine orientation difference was used to manipulate target–distractor discriminability. A feature (orientation) search experiment produced results close to the unlimited-capacity model.

Introduction
Visual search for conjunctions of simple features has been an important experimental paradigm in the history of vision research (for a relatively recent review, see Quinlan, 2003). Treisman and Gelade (1980) found that reaction times in searching for conjunction targets increase with the number of objects in a display, consistent with serial search. Reaction times for single-feature targets, however, were almost independent of the number of objects, apparently consistent with parallel processing. These results led to the well-known feature-integration theory of attention. 
However, further studies revealed that several feature conjunctions can be searched for in parallel (Nakayama & Silverman, 1986). Wolfe, Cave, and Franzel (1989) found that even search for the conjunctions used by Treisman and Gelade (1980) can be very efficient and apparently not consistent with a serial model. Still, these authors did not question the fundamental difference between simple features and their conjunctions, or the role of attention in feature integration. 
A well-known alternative theory was proposed by Duncan and Humphreys (1989). Their original idea was that search efficiency can be explained by target–distractor and distractor–distractor similarity. There are necessarily two different types of distractors in classic conjunction search, while simple-feature search needs just one type of distractor. Therefore, the variance of distractors could explain the lower efficiency of conjunction search. In the course of an extended debate, both parties modified their original accounts and designed increasingly complex experiments (Treisman, 1991, 1992; Duncan & Humphreys, 1992). However, the results were rather inconclusive. 
Using reaction time as the measure of performance has several complications. First, there is no theoretical slope of response time versus set size that would necessarily point to serial search. Second, it is well known that serial and limited-capacity parallel models can mimic each other (e.g., Townsend, 1971), and there is no simple way to distinguish between the two. Furthermore, reaction-time studies need to control for possible speed/accuracy trade-offs. Last, using simple serial and parallel search models might not be appropriate because of the possibility of eye movements. 
Experiments using brief presentation times, and proportion of correct responses as a measure of performance, might overcome at least some of these problems. The results of these experiments can be analyzed within the framework of signal-detection theory (SDT; Shaw, 1980; Palmer, Verghese, & Pavel, 2000). 
Within this framework, the capacity limitations of visual processing are of main interest. The simplest SDT model assumes that the signal-to-noise ratio of discrimination of the items is independent of their number, and a slight drop of performance with increasing set size is explained by the integration of multiple noisy percepts in (almost) ideal decision making. Search for a simple-feature target among homogeneous distractors usually fits this model (Palmer, Ames, & Lindsey, 1993; Palmer, 1994). 
Using proportion correct as a measure of performance, Eckstein and colleagues (Eckstein, 1998; Eckstein et al., 2000) have demonstrated that the unlimited-capacity SDT model can fit conjunction-search data as well, which is at odds with traditional accounts of conjunction search (e.g., Treisman & Gelade, 1980). Eckstein's models assume suboptimal integration of signals from two feature dimensions and predict reduced performance in conjunction search as compared to feature search across all set sizes (given that target–distractor feature differences are the same for both conditions). Importantly, these studies suggest that a decrease in performance in conjunction search can be explained without the need to invoke limited-capacity or serial models. However, predictions for set-size effects are somewhat ambiguous, depending on the overall level of performance, the measure in which it is expressed, and the range of set sizes. 
An important property of these models is a functional relationship between the set-size effect and overall level of performance. By adjusting target–distractor discriminability, it should be easy to produce the same levels of performance and consequently the same set-size effects for feature and conjunction search. Many results from search experiments really do appear to support the idea of larger set-size effects for lower levels of performance. However, there are also examples of larger set-size effects co-occurring with the same (or even better) performance, as measured in a minimum set-size condition (e.g., Treisman, 1991; McElree & Carrasco, 1999). There have been only a few studies using an SDT paradigm similar to Eckstein's. Still, McLean (1999) and Põder (2017) have found somewhat larger set-size effects in conjunction search compared to feature search. 
A couple of studies have measured proportion correct as dependent on set size without direct application of SDT-based models. Carrasco and Yeshurun (1998) used a brief presentation of stimuli and measured both proportion correct and reaction time. The observed set-size effects indicated a limited processing capacity for conjunction search. McElree and Carrasco (1999) used a speed/accuracy trade-off procedure to compare feature and conjunction searches. Their results reject serial processing models for both conditions. While feature search was consistent with unlimited capacity, conjunction search exhibited several indications of capacity limitation. In particular, the set-size effect in terms of d′ was significantly larger for conjunction as compared to feature search despite better overall performance in conjunction search. This result apparently contradicts Eckstein's (1998) explanation of differences between feature and conjunction search. However, both of these studies used insufficient control for crowding effects. 
Huang and Pashler (2005) used a paradigm of simultaneous versus successive presentation and found no significant capacity limitation for conjunction search. This is consistent with Eckstein's results. 
It has been known for a long time that interitem distance plays an important role in conjunction search (Cohen & Ivry, 1991). Pelli, Palomares, and Majaj (2004) have proposed that conjunction errors are caused not by the limitations of spatial attention but by crowding effects. This would be the case when several objects fall within the same integration field, whose size is determined by the eccentricity only. Neri and Levi (2006) followed a similar idea, varying the distance between the items in a search display to determine the threshold distance for 82% correct performance, for different set sizes in the fovea and at 7° eccentricity. The results were accounted for by a spatial disarray between color and orientation feature maps. According to their model, this crowdinglike mechanism also explained the larger set-size effects observed with conjunction targets. However, interobject distance was varied together with the size of the objects, which complicates theoretical interpretation. Also, the extent of spatial disarray used in the model was much less than the spatial uncertainty for a given eccentricity implied by traditional crowding studies (Bouma, 1970; Andriessen & Bouma, 1976). 
In early studies of visual search, the role of eye movements was frequently overlooked. It was believed that covert shifts of attention are nearly equivalent to eye movements, and restriction of these does not affect the nature of visual search (e.g., Klein & Farrell, 1989). Also, in traditional reaction-time experiments, the spatial positions of stimuli have only a minor effect because eye movements can undo any difficulties caused by initial retinal positions. More recent studies have argued that a large part of the regularities of visual search can be explained by accurate modeling of low-level limitations of peripheral vision and regularities of eye movements (Geisler & Chou, 1995). 
Rosenholtz and colleagues (Rosenholtz, Huang, & Ehinger, 2012; Rosenholtz, Huang, Raj, Balas, & Ilie, 2012) have proposed that the efficiency of visual search may be fully explained by local processes within a classic zone of visual crowding combined with eye movements, and that limited-capacity attentional processing is unnecessary. However, this proposal was based on experiments with crowded displays only. This theory cannot predict any strong set-size effects in conditions without crowding. 
Several researchers have noticed that conjunction search can be carried out in two steps (Egeth, Virzi, & Garbart, 1984; Kaptein, Theeuwes, & van der Heijden, 1995). For example, when searching for a vertical red bar, an observer can select all red bars first and then search for a vertical among the red ones. Exact predictions of this type of model depend on assumed properties of the underlying mechanisms—for example, the selection may take some time or be partly incorrect. Gobell, Tseng, and Sperling (2004) have proposed a model where the subset selection is limited by the spatial resolution of attention. According to this model, a target positioned near the center of a group of items with the same color should be detected more easily than one surrounded by items of a different color. With random positions, this model predicts a drop in performance with increasing set size, because a larger number of items needs finer allocation of attention to select the subset with the correct color. 
In this study, we attempt to better understand the mechanisms of search for feature conjunctions within the SDT framework. In our experiments, we use a brief exposure duration, and proportion correct as the measure of performance (Shaw, 1980; Bergen & Julesz, 1983; Eckstein, 1998). 
We extend Eckstein's studies in several ways. Instead of a contrast–orientation conjunction, we use a color–orientation one that has been used in several classic studies of conjunction search. We study a much more extensive range of set sizes, from two to 24. Because of the limitations of spatial attention (Kröse & Julesz, 1989), the relevant set-size cueing used by Eckstein (1998) would not be applicable to these set sizes. To minimize crowding effects, we use a special distribution of stimuli in a display (Intriligator & Cavanagh, 2001) that keeps all interitem distances above the critical distance. If crowding is an important factor in the previously observed capacity limitations (Rosenholtz, Huang, & Ehinger, 2012; Rosenholtz, Huang, Raj, et al., 2012), we should not find capacity limitations in our study. To control eccentricity effects, we sample spatial positions from the same range of eccentricities for every set size. 
While Eckstein's studies measured performance for one target–distractor difference only, we tried to gather more detailed data and used a full range of target–distractor differences. We fixed target–distractor discriminability along one feature dimension (color) at a high level and varied another (orientation) to build psychometric functions. 
It is difficult to create target-present and target-absent displays for classic conjunction search without any differences at the feature level. To avoid possible feature cues, we used a method with two conjunction targets (Neri & Levi, 2006) that keeps the feature distributions identical for target-present and target-absent trials. Note that the presence of a second target does not make the task more complex or considerably different: It is sufficient to choose just one target and search for that. 
In summary, our study attempted to determine whether simple SDT-based models introduced for conjunction search by Eckstein (1998) generalize to different experimental paradigms, to different feature conjunctions, and to a broader range of experimental conditions. Finding differences, we tried to modify the simplest models. 
Methods
Participants
Eight participants (three women, five men; median age = 28 years) took part in Experiment 1 (conjunction search) and five (three women, two men; median age = 29 years) in Experiment 2 (feature search). Four participants took part in both experiments. Five participants took part in Experiment 3 (conjunction search with fine orientation differences) and three in Experiment 4 (conjunction search, modified spatial segregation). All participants had normal or corrected-to-normal vision. The research adhered to the tenets of the Declaration of Helsinki. 
Apparatus
Experiments were programmed in Microsoft Visual Basic (Version 6.0). Stimuli were presented on a CRT monitor with a refresh rate of 85 Hz and a resolution of 1,024 × 768 pixels. Stimuli were presented on a light-gray background (approximately 50 cd/m2). 
Stimuli
Stimuli for Experiments 1 and 2 consisted of horizontal and vertical bars presented simultaneously on the screen (Figure 1A and 1B). The length of the bars and the set size (i.e., number of bars) were varied between conditions. The bars could be 4, 5, 6, 7, 9, 13, or 17 pixels in length. The width of the bars was 3 pixels. Set size could be 2, 4, 8, 16, or 24. There were equal numbers of horizontal and vertical as well as red and blue bars in each display. Standard red and blue colors of Visual Basic were used. 
Figure 1
 
Examples of stimuli for conjunction search. Experiment 1: (A) horizontal and vertical bars (targets: red vertical and blue horizontal), set size 24, bar length 13 and (B) set size 4, bar length 5. (C) Experiment 3: tilted bars (targets: right-tilted red and left-tilted blue), set size 12, tilt 10°. Experiment 4: (D) spatially segregated and (E) spatially nonsegregated displays. All examples are with targets present.
In the conjunction-search experiment, participants were required to indicate whether the targets (red vertical bar or blue horizontal bar) were present in the trial. The targets were present with a probability of 0.5. Distractors were red horizontal and blue vertical bars. 
In the feature-search experiment, participants were required to determine whether a vertical bar (of any color) was present during the trial. Two (red and blue) vertical bars were present in target-present trials. The distractors were all horizontal bars of both colors. 
Viewing distance was approximately 50 cm. The smallest stimuli (4 pixels) corresponded to 0.1° of visual angle, and the longest (17 pixels) to 0.4° of visual angle. The bars were presented around the fixation point in three rings. Maximum eccentricity was approximately 6°. The distance to the nearest neighbor was not less than 0.6° of eccentricity for all stimuli in a display. A dark cross indicating the fixation point was permanently present. Stimuli were presented for 60 ms. 
Stimuli in Experiment 3 were bars with a width of 3 pixels and a length of 17 pixels (0.07° × 0.4°). Equal numbers of bars were tilted clockwise and counterclockwise from vertical, by the same angle (Figure 1C). The tilt was varied from 1° to 30°. 
The methods of Experiment 4 were identical to those of Experiment 1 except that a special positioning algorithm was used to vary the spatial segregation of the two colors (for details and rationale, see Experiment 4: Spatial resolution of attention?). 
Set size and bar length (Experiments 1, 2, and 4) and absolute tilt (Experiment 3) were fixed within blocks of trials. Participants ran 50 to 100 trials for each combination of these main independent variables. For practical reasons, we did not run every possible combination of set size and bar length or set size and tilt, since some combinations had predictably perfect or chance performance. 
Modeling
We follow the general ideas of SDT applied to visual search (Palmer et al., 2000). In our model, we allow for an effect of encoding-capacity limitations (McLean, 1999; Mazyar, Van den Berg, & Ma, 2012). We assume that the discriminability of a single item depends on the number of items in a display (set size) according to a power function:  
\begin{equation}d_{1n}^{\prime} = {{d_1^{\prime} } \over {{n^{{b \over 2}}}}}{\rm {,}}\end{equation}
where \({d^{\prime} _1}\) is discriminability for set size 1, n is set size, and b is a measure of the set-size effect: b = 0 for unlimited capacity (independent processing of items) and b = 1 for fixed capacity (sample size).  
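As a minimal sketch of this assumption (in Python, not the authors' original Visual Basic implementation), the single-item discriminability for a given set size can be computed as:

```python
def d_prime_1n(d1: float, n: int, b: float) -> float:
    """Single-item discriminability d'_1n for set size n.

    b = 0: unlimited capacity (set size leaves each item unaffected);
    b = 1: fixed capacity (sample-size model).
    """
    return d1 / n ** (b / 2)
```

For example, under the fixed-capacity model (b = 1), quadrupling the set size halves the single-item d′.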
To predict search performance, we must specify a decision model. It computes the participant's d′ for a search task as a function of the local signal-to-noise ratio (\({d^{\prime} _{1n}}\)) and set size:  
\begin{equation}{d^{\prime} _n} = f\left( {{{d^{\prime} }_{1n}},n} \right){\rm {.}}\end{equation}
 
We used an ideal decision model (e.g., Mazyar et al., 2012). For each trial, this model calculates the likelihoods of the observed signals under the hypotheses of target present and target absent and selects the one with the higher total likelihood. Assuming equal priors and Gaussian noise, and setting the means of the internal variables to xD = −0.5 for distractors and xT = 0.5 for the target, the log-likelihood ratio (target present/target absent) for a single trial is  
\begin{equation}L = \log {1 \over n}\sum\limits_{i = 1}^n {{e^{{{{x_i}} \over {{\sigma ^2}}}}}} {\rm {,}}\end{equation}
where xi is a noisy internal variable for object i and σ is the standard deviation of the noise. The ideal model selects the response “target present” when L > 0 and “target absent” otherwise.  
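This decision rule can be checked by direct Monte Carlo simulation. The following Python sketch (trial counts and noise levels are illustrative choices, not values from the paper) draws noisy internal variables with the distractor and target means given above and applies the log-likelihood rule:

```python
import math
import random

def loglik_ratio(xs, sigma):
    # L = log( (1/n) * sum_i exp(x_i / sigma^2) )
    n = len(xs)
    return math.log(sum(math.exp(x / sigma ** 2) for x in xs) / n)

def simulate_pc(n, sigma, trials=20000, seed=1):
    """Proportion correct of the ideal observer in yes/no search.

    Distractor mean -0.5, target mean +0.5; on target-present trials
    one item is replaced by a target. Equal priors, respond "present"
    when L > 0.
    """
    rng = random.Random(seed)
    correct = 0
    for t in range(trials):
        present = t % 2 == 0
        xs = [rng.gauss(-0.5, sigma) for _ in range(n)]
        if present:
            xs[0] = rng.gauss(0.5, sigma)
        if (loglik_ratio(xs, sigma) > 0) == present:
            correct += 1
    return correct / trials
```

With low noise (high single-item d′) the simulated proportion correct approaches ceiling; with high noise it approaches chance, as expected.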
To simplify the fitting procedure, we used a polynomial approximation of simulation results for the appropriate range of parameters (Põder, 2017):  
\begin{equation}{d^{\prime} _n} = {{{d^{\prime} _{1n}}} \over {{n^{0.40 - 0.35\log {d^{\prime} _{1n}} - 0.22{{\log }^2}{d^{\prime} _{1n}}}}}}{\rm {.}}\end{equation}
 
Assuming that in our experiments the participants searched a one-color subset of items for one target only, we used half of the actual set size as n in model fitting. 
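Under these assumptions, the predicted search d′ can be sketched as follows (Python; the base of the logarithm in the published polynomial is not stated here, so base 10 is an assumption, and at d′_1n = 1 the choice of base does not matter):

```python
import math

def d_search(d1n: float, n: int) -> float:
    """Polynomial approximation of search d' for set size n (Poder, 2017).

    ASSUMPTION: base-10 logarithm; valid only for the moderate range
    of d'_1n for which the approximation was derived.
    """
    lg = math.log10(d1n)
    return d1n / n ** (0.40 - 0.35 * lg - 0.22 * lg ** 2)

def d_search_conjunction(d1n: float, set_size: int) -> float:
    """With two possible targets, search is assumed to cover only the
    one-color subset, i.e., half of the nominal set size."""
    return d_search(d1n, set_size // 2)
```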
It is a usual assumption that the internal representation of a relevant stimulus parameter is not necessarily proportional to its physical counterpart. Frequently, this relationship has been expressed as a power function (Palmer et al., 2000). In some contrast-discrimination studies, more complex sigmoid curves have been used (e.g., Foley, 1994; Foley & Schwarz, 1998). We found that our conjunction-search data can be accurately fitted by a Naka–Rushton function (Naka & Rushton, 1966):  
\begin{equation}d_1^{\prime} = {d_m}{{{x^k}} \over {{x^k} + {c^k}}}{\rm {,}}\end{equation}
where dm is an asymptotic response, x is the physical target–distractor difference (length − width of the stimulus bar), c is a semisaturation constant (x value where the response is equal to half of the asymptote), and k is the slope at that point. Thus, our full search model has four free parameters: the capacity-limitation exponent b and three parameters of the Naka–Rushton function (dm, c, k). This model was compared with two simpler ones: an unlimited-capacity model with exponent b = 0 and a limited-capacity model with a power-transducer function  
\begin{equation}d_1^{\prime} = w{x^k}\end{equation}
instead of a Naka–Rushton one. Both simpler models have three free parameters.  
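The two transducer functions being compared can be written compactly (a Python sketch; the parameter values in the assertions below are arbitrary examples, not fitted values):

```python
def naka_rushton(x: float, d_m: float, c: float, k: float) -> float:
    """Saturating transducer: half of the asymptote d_m is reached at x = c."""
    return d_m * x ** k / (x ** k + c ** k)

def power_transducer(x: float, w: float, k: float) -> float:
    """Non-saturating alternative used by the simpler comparison model."""
    return w * x ** k
```

The key qualitative difference is that the Naka–Rushton transducer saturates at d_m for large target–distractor differences, while the power function grows without bound.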
The models predict d′ values for a yes/no visual-search task. The predicted d′ values were converted into predictions of unbiased proportion correct:  
\begin{equation}{P_c} = \Phi \left( {{{{{d^{\prime} }_n}} \over 2}} \right){\rm {,}}\end{equation}
where Φ is the standard normal distribution function (Macmillan & Creelman, 1991). The behavior of the model as dependent on different parameters is illustrated in Supplementary Figure S1.  
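The conversion from predicted d′ to proportion correct needs only the standard normal distribution function, e.g. (a Python sketch via the error function):

```python
import math

def pc_from_dprime(d_n: float) -> float:
    """Unbiased proportion correct for yes/no search: Pc = Phi(d'_n / 2)."""
    return 0.5 * (1.0 + math.erf((d_n / 2.0) / math.sqrt(2.0)))
```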
Our individual data sets consisted of 21 to 36 proportions correct. Microsoft Excel Solver was used to find the maximum-likelihood parameters of a model by minimizing the likelihood-ratio statistic  
\begin{equation}G = 2\mathop \sum \limits_i {O_i}\ln \left( {{{{O_i}} \over {{E_i}}}} \right){\rm {,}}\end{equation}
where Oi is the observed and Ei the predicted count in cell i.  
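The fit statistic is straightforward to compute from observed and expected counts (a Python sketch; any general-purpose optimizer could take the place of Excel Solver for the minimization itself):

```python
import math

def g_statistic(observed, expected):
    """Likelihood-ratio statistic G = 2 * sum O_i * ln(O_i / E_i).

    Cells with O_i = 0 contribute 0 (the limit of x * ln(x) as x -> 0).
    """
    return 2.0 * sum(o * math.log(o / e)
                     for o, e in zip(observed, expected) if o > 0)
```

G is zero when observed and expected counts agree exactly and grows as they diverge, so minimizing G yields maximum-likelihood parameters.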
To control for possible effects of decision bias, we repeated our model fits using unbiased proportions correct:  
\begin{equation}{P_c} = \Phi \left[ {{{z(H) - z(F)} \over 2}} \right]{\rm {,}}\end{equation}
where H and F are the proportions of hits and false alarms, Φ is the standard normal distribution function, and z is the inverse of the normal distribution function.  
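This bias-corrected measure can be computed directly from hit and false-alarm rates, e.g. in Python using the standard library's normal distribution:

```python
from statistics import NormalDist

_STD_NORMAL = NormalDist()

def unbiased_pc(hit_rate: float, false_alarm_rate: float) -> float:
    """Bias-free proportion correct: Pc = Phi((z(H) - z(F)) / 2)."""
    z = _STD_NORMAL.inv_cdf
    return _STD_NORMAL.cdf((z(hit_rate) - z(false_alarm_rate)) / 2.0)
```

In the symmetric case H = 1 − F the formula returns H itself, and when H = F (no sensitivity) it returns chance performance, 0.5.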
Results
Experiments 1 and 2: Feature and conjunction search
Results (averaged across participants) for conjunction search (Experiment 1) and feature search (Experiment 2) are depicted in Figure 2A and 2B. It is apparent that the patterns are very different for the two tasks. For feature search, performance improves quickly with increasing bar length and approaches 100% correct for every set size. For conjunction search, improvement is slower and more dependent on set size. The curves for larger set sizes do not appear to reach 100% for any possible bar length. 
Figure 2
 
Results averaged across participants for three experiments. (A) Conjunction search with horizontal and vertical bars, Experiment 1; (B) feature search with horizontal and vertical bars, Experiment 2; and (C) conjunction search with tilted bars, Experiment 3. The error bars represent the standard error of the mean.
Examples of individual results and model fits are shown in Figure 3 (data for all participants are given in Supplementary Figures S2–S4). Conjunction-search data were very well fitted by a limited-capacity SDT model combined with a Naka–Rushton transducer function (Table 1). The power function was significantly worse for several participants. Note that calculation of statistical significance takes into account the number of free parameters used in the different models (four in the limited-capacity Naka–Rushton model, three in the two simpler models). 
Figure 3
 
Examples of individual results and model fits for (A) conjunction search and (B) feature search. Symbols indicate the experimental data; lines correspond to the best model fits.
Table 1
 
Goodness of fit (likelihood-ratio statistic G) of three models to individual data from the conjunction-search experiment. Statistically significant differences between the model and the data: *p < 0.05, ***p < 0.001.
The fit parameters of the limited-capacity model with the Naka–Rushton function are given in Table 2. The exponent of capacity limitation varied from 0.44 to 1.3, with mean 0.83 and standard deviation 0.12, not far from the prediction of the fixed-capacity (sample-size) model (1.0). The unlimited-capacity model was reliably rejected for all participants. 
Table 2
 
Parameters of the limited-capacity Naka–Rushton model fitted to data from Experiments 1–3. Means across participants, with standard error of the mean.
We fitted the same models to the data for feature (orientation) search (Table 3). On this task, the limited-capacity model was slightly better for one participant only; for all others, the assumption of limited capacity is apparently not needed. The exponent of capacity limitation was 0.27 for the first participant and 0 for the others (mean 0.05). Fits of the model with the Naka–Rushton transducer were better than those with the power function, similar to the conjunction-search results. The relatively poor fit of participant NK's data is likely explained by frequent attentional lapses, independent of target–distractor difference. All fit parameters are given in Table 2. While the exponent k did not differ from that in the conjunction-search experiment, the other two Naka–Rushton parameters seem to be smaller in the feature-search experiment. 
Table 3
 
Goodness of fit (likelihood-ratio statistic G) of three models to individual data from the orientation-search experiment. Statistically significant differences between the model and the data: *p < 0.05, **p < 0.01, ***p < 0.001.
A possible role of response bias
An analysis of hits and false alarms revealed relatively large variability of the criterion across participants as well as experimental conditions. We attempted to control for criterion effects by calculating unbiased proportions correct, assuming equal-variance Gaussian distributions of signal and noise. The results of modeling (reported in Supplementary Tables S1–S6) were qualitatively the same as with raw proportions correct. 
Theoretically, there is no good reason to assume equal variance of decision variables between target-present and target-absent trials. Simple search models predict that the variance on target-present trials should be larger, and that the present-to-absent variance ratio should depend on set size and target–distractor difference (Eckstein, 1998; Palmer et al., 2000). Our experimental results exhibit some systematic shifts of criterion that are consistent with these predictions: The proportion of “yes” responses decreases with increases of both set size and target–distractor difference (Figure 4; both effects significant at p < 0.01). Therefore, the observed “biases” may, at least partly, reflect optimal decision making in the standard yes/no visual-search paradigm. Because our control of criterion had very small effects, we believe that response bias is unlikely to be an important factor in this study, and we report results based on raw proportions correct in the main text. 
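The prediction that the decision variable has larger variance on target-present trials can be verified with a small Monte Carlo simulation of a max-rule search model (an illustration under standard SDT assumptions, not the authors' exact fitted model):

```python
import numpy as np

rng = np.random.default_rng(0)

def max_rule_variances(n_items, d_prime, n_trials=200_000):
    """Monte Carlo check of a max-rule search model.

    The decision variable is the maximum of the item responses.  On
    target-present trials one item is drawn from the shifted (target)
    distribution, which inflates the variance of the maximum relative
    to target-absent trials.
    """
    noise = rng.standard_normal((n_trials, n_items))
    absent = noise.max(axis=1)
    present = noise.copy()
    present[:, 0] += d_prime      # one item carries the target signal
    present = present.max(axis=1)
    return absent.var(), present.var()

v_absent, v_present = max_rule_variances(n_items=8, d_prime=2.0)
```

With these illustrative values, the present-trial variance clearly exceeds the absent-trial variance, so a criterion that is optimal for one set size or discriminability is not optimal for another.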
Figure 4
 
Systematic biases observed in conjunction search (proportions of “yes” responses as dependent on bar length and set size). Data from Experiment 1, averaged across participants.
Experiment 3: Fine orientation differences
In this experiment, we tested the generality of the results of Experiment 1. We held the length of the bars constant and manipulated target–distractor discriminability using orientation difference. In each trial, equal numbers of bars were tilted either left or right; tilt varied from 1° to 30°. The results were qualitatively similar to those of Experiment 1 (see Figure 2C). The model fits are given in Table 4. Again, unlimited-capacity models can be rejected for all participants, and the Naka–Rushton transducer is superior to the power function. The means of the fit parameters (see Table 2) did not differ significantly from the values found in Experiment 1. 
Table 4
 
Goodness of fit (likelihood-ratio statistic G) of three models to individual data from the conjunction-search experiment with fine orientation manipulation. Statistically significant differences between the model and the data: *p < 0.05, ***p < 0.001.
Experiment 4: Spatial resolution of attention?
We tried to determine whether the pattern of the conjunction-search data is consistent with the spatial-resolution account of Gobell et al. (2004). If the observed set-size effects were caused by a limited spatial resolution of attention, there should be a strong effect of proximity of different-color bars to the targets. Performance should be the worst with targets surrounded by different-color items and the best with targets either surrounded by same-color items or having no other items in nearby positions. That effect should appear regardless of set size. For four participants in Experiment 1, the positions of the targets and distractors were recorded, and numbers of distractors with the same and different color surrounding the targets were determined for each trial. We calculated partial correlations between proportion correct and number of distractors of different color surrounding the target, controlling for the effects of set size and bar length. We did not observe any significant effects. 
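Partial correlation controlling for covariates can be computed as the correlation between ordinary-least-squares residuals; a sketch of this standard computation (our own helper, not the analysis code used in the study):

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Partial correlation of x and y, controlling for covariates.

    Computed as the Pearson correlation between the residuals of x and y
    after regressing each on the covariates (plus an intercept).
    x, y : 1-D arrays; covariates : array of shape (n_obs,) or
    (n_obs, n_covariates).
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    z = np.column_stack([np.ones(len(x)), np.asarray(covariates, float)])
    # Residualize x and y on the covariates.
    rx = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]
    ry = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]
```

If two variables are correlated only because both depend on a common covariate, the raw correlation is high while the partial correlation is near zero.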
Of course, it is possible that the resolution of attention affects the rejection of distractors as well. Therefore, we calculated the mean number of different-color items surrounding all items in a display as a measure of the demand on attentional resolution. This measure predicts performance almost as well as set size does. However, the probability of different-color items in nearby positions is highly correlated with set size in our data set (r = 0.96), which makes separating the effects of the two variables virtually impossible. 
We attempted to clarify the issue further with Experiment 4, in which the variation in spatial segregation of the two colors was increased. Methods were identical to those of Experiment 1, except that instead of distributing items of different colors randomly, we used a mixture of two placement strategies: the first tried to place every new item in proximity to items of a different color, and the second tried to place different colors farther apart. The choice between strategies was random (with equal probabilities) on each trial. Examples of the resulting configurations are shown in Figure 1D and 1E. This produced large variation in the spatial segregation of the colors within each set size, and the correlation between set size and the mean number of different-color items in adjacent positions was reduced to 0.54. 
The main properties of the data are illustrated in Figure 5, which shows the presence of two effects: a pure set-size effect that is independent of proximity of items with different colors, and a spatial-resolution effect that modifies performance within each set size. 
Figure 5
 
Proportions correct as dependent on set size and spatial segregation of items with different colors (in Experiment 4, for participant EP, only comparable data points based on the same range of bar lengths are presented). Error bars represent the standard error of proportion.
We tried several statistical methods to test the attention-resolution hypothesis of Gobell et al. (2004). For example, if the observed set-size effects were caused by a limited spatial resolution of attention, there should be even stronger effects of the proximity of different-color bars in a display, and adding set size to this model should not considerably improve the fits. 
Results from a logistic regression are given in Table 5. These data indicate that set size has a slightly stronger effect on performance than the proximity of different-color items, and that the model with both variables is significantly better than the model with proximity only. 
Table 5
 
Fits of logistic-regression models (Cox and Snell r2) with different sets of independent variables, for three participants. Dependent variable = correctness of response. All effects reported are significant with p < 0.001.
The results reject the strong hypothesis that the set-size effect can be explained by the spatial proximity of different-color items. Still, the proximity of different colors appears to have an effect of its own on performance. The results were similar for all three participants, and different statistics led to the same conclusions. 
Discussion
In this study, we found strong evidence for capacity limitations in visual search for conjunctions of simple features (color and orientation), in an experiment with brief exposure that used proportion correct as the measure of performance. No such limitations were found for simple-feature (orientation) search under similar conditions. The results are broadly consistent with classic studies using reaction time as the measure of performance (Treisman & Gelade, 1980; Treisman & Sato, 1990) and appear to contradict earlier studies applying SDT to conjunction search (Eckstein, 1998; Eckstein et al., 2000). We have no definitive explanation for these different findings. It is worth noting, however, that Eckstein formally compared only unlimited-capacity and serial search models. Although our results reveal capacity limitations, they do not imply serial processing: Our estimate of the strength of capacity limitations falls between the unlimited-capacity and serial models, close to the prediction of the limited-capacity (sample-size) model. Below, we consider explanations that might account for the different results. 
Eckstein (1998) used spatial cueing of relevant set size to control “sensory” effects like crowding. However, the results were similar for Eckstein et al. (2000), where such a control was not used. Also, Põder (2017) used (a slightly different) relevant-set-size manipulation and still found evidence for limited capacity in conjunction search. Therefore, it seems unlikely that low-level interactions could explain the difference. 
Different ranges of set size may play some role. Eckstein's studies used relatively restricted ranges of set sizes (4 to 12 and 2 to 8), whereas we varied set size from 2 to 24. Some observations (Pashler, 1987) suggest that conjunction search may follow different regularities above and below set size 8. 
A potential factor in the different results could be the choice of visual features for conjunction-search experiments. It has long been a challenge for feature-integration theory that some conjunctions behave differently from others. A recent study (Põder, 2017) found different capacity limitations for orientation–color and size–color conjunction search using SDT. There are also well-known asymmetries in feature search when target and distractor features are swapped (Treisman & Gormican, 1988). 
It is likely that particular target and distractor features make a conjunction target more or less salient in different experiments. In the studies by Eckstein (1998) and Eckstein et al. (2000), the conjunction target had higher contrast and tilted orientation, features that should be relatively salient among lower-contrast and vertically oriented distractors (Braun, 1994; Carrasco, McLean, Katz, & Frieder, 1998). Huang and Pashler (2005) used a big vertical rectangle as the target among smaller or horizontal distractors; very likely, big is more salient than small. Target and distractor features were arguably more symmetrical in the present study (red vs. blue, horizontal vs. vertical, left-tilted vs. right-tilted). 
It is possible that different results can be (at least partly) explained by interindividual differences. Frequently, considerable differences across observers have been found in conjunction search (e.g., Wolfe et al., 1989). In the present study, some participants exhibited a capacity-limitation measure b < 0.5 that is closer to the unlimited- than the fixed-capacity model. Similarly, one out of three observers in Eckstein's (1998) study exhibited a considerable extent of capacity limitation (b ≈ 0.4, according to our estimation). 
Several studies have reported large changes over the course of practice on a given search task. After a few days of training, originally large set-size effects for conjunction search (in terms of reaction-time slopes) may decrease to nearly zero, consistent with parallel search (Ellison & Walsh, 1998; Lobley & Walsh, 1998). Learning effects differ across conjunctions, and learning rates differ across observers. In our study, the majority of participants ran only a small number of practice trials before the experiments; we do not have this information for Eckstein's observers. It seems possible that some combination of these factors could explain the differences between our results, but further studies are needed to confirm this. 
An unusual feature of our experiments was using two complementary targets to remove possible feature cues in conjunction search (Neri & Levi, 2006). While this method guarantees the exact balance of features between target-present and target-absent trials, it may cause other problems because of different effects of the second target in feature- and conjunction-search conditions. However, we have run pilot experiments with a single target as well, for both feature and conjunction search. We did not observe any systematic differences between one-target and two-target versions. Therefore, the possible effects must be small and could hardly affect our main findings. 
A novel aspect of our model was using a Naka–Rushton transducer instead of an ordinary linear or power function (Naka & Rushton, 1966; Albrecht & Hamilton, 1982; May & Solomon, 2015) combined with a general SDT-based search model. However, the choice of transducer function was not a prerequisite of demonstrating capacity limitations, since the limited-capacity power function also fitted the data reasonably well. Nevertheless, the Naka–Rushton function helped to fit otherwise very different data sets by the same model. When compared to traditional models of visual search, some observers exhibit performance similar to serial search (lower asymptotes for larger set sizes) while others are more consistent with noise-limited (e.g., sample size) models. A Naka–Rushton transducer makes it possible to account for both patterns with a single model. At present, we cannot confirm whether it reflects a true neurobiological mechanism or mimics something we really do not understand. 
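One plausible parameterization of such a model is sketched below: the per-item d′ is a Naka–Rushton transform of the target–distractor difference, attenuated by set size raised to a capacity exponent, and proportion correct is obtained by Monte Carlo simulation of a max-rule yes/no decision. All parameter names and values here are illustrative, not the fitted values from Table 2:

```python
import numpy as np

rng = np.random.default_rng(3)

def naka_rushton(delta, r_max, c50, q):
    """Naka-Rushton transform of the target-distractor difference delta."""
    return r_max * delta**q / (delta**q + c50**q)

def limited_capacity_pc(delta, set_size, r_max=3.0, c50=5.0, q=2.0, b=0.5,
                        n_trials=100_000):
    """Monte Carlo proportion correct for a max-rule yes/no search model.

    Per-item d' is the Naka-Rushton transform of delta divided by
    set_size**b (b = 0: unlimited capacity; b = 0.5: sample-size model).
    A neutral criterion at the median of the pooled decision variable
    gives Pc = (hits + correct rejections) / 2.
    """
    d = naka_rushton(delta, r_max, c50, q) / set_size**b
    absent = rng.standard_normal((n_trials, set_size)).max(axis=1)
    present = rng.standard_normal((n_trials, set_size))
    present[:, 0] += d                # one item carries the target signal
    present = present.max(axis=1)
    criterion = np.median(np.concatenate([absent, present]))
    hits = (present > criterion).mean()
    correct_rejections = (absent <= criterion).mean()
    return (hits + correct_rejections) / 2.0

pc_small = limited_capacity_pc(delta=10.0, set_size=2)
pc_large = limited_capacity_pc(delta=10.0, set_size=24)
```

With b > 0 the model produces the qualitative pattern in the data: for a fixed target–distractor difference, proportion correct falls as set size grows, both because the per-item signal is attenuated and because the max is taken over more noise samples.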
The present results cannot be explained by the usual crowding-based models. All interobject distances were well above critical distances of crowding for a given eccentricity. Of course, there are ideas of crowding at multiple levels (e.g., Manassi & Whitney, 2018) that, in principle, allow interactions beyond the traditional critical distance. This type of modeling may equate global capacity limitations with crowding zones covering the full visual field; we should then study the properties of this high-level crowding. 
Could heterogeneity of distractors explain different set-size effects for conjunction and feature searches in this study (as proposed by Duncan & Humphreys, 1989)? At first glance, this looks unlikely. In our orientation-search experiment, color was varied as well. Because color differences were much more salient than orientation differences, the variance along the color dimension should dominate in both experiments, and total variance in a display could not be very different between two conditions. 
Rosenholtz (1999, 2001) has proposed a more advanced saliency model for visual search with heterogeneous distractors. Saliency is calculated as the Mahalanobis distance between the target and set of distractors in a feature space and takes into account the variance and covariance of distractors in that space. When applied to a classic conjunction display, it resembles Eckstein's (1998) linear model, predicting lower performance for conjunction as compared to feature search when feature differences are the same. It does not explain set-size effects found in our study. 
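The saliency computation can be sketched as follows (a generic implementation of the Mahalanobis distance in a feature space, not Rosenholtz's full model):

```python
import numpy as np

def mahalanobis_saliency(target, distractors):
    """Target saliency as the Mahalanobis distance between the target and
    the distractor distribution in feature space (Rosenholtz, 1999).

    target : 1-D feature vector; distractors : array (n_items, n_features).
    The distance accounts for the variance and covariance of the
    distractors, so a target far from the distractor cloud along a
    low-variance direction is highly salient.
    """
    distractors = np.asarray(distractors, float)
    mu = distractors.mean(axis=0)
    cov = np.cov(distractors, rowvar=False)
    diff = np.asarray(target, float) - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))
```

For example, with distractors clustered around two feature combinations (as in a conjunction display, with a little jitter to keep the covariance nonsingular), moving the target farther from the distractor mean along the same direction increases its saliency proportionally.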
Still, in a feature-search condition, there is a feature (orientation) singleton in a display among the items of homogeneous orientation. In a conjunction-search condition, both instances of both features are distributed in equal numbers across the display. Some bottom-up saliency models (e.g., Li, 2002) can spot the feature singleton in a feature-search display regardless of variance along the other feature dimension but do not prefer any combination of features in a conjunction display. This might explain different capacity limitations for feature versus conjunction searches. 
There are several two-step theories applicable to search for feature conjunctions (Di Lollo, Kawahara, Zuvic, & Visser, 2001; Huang & Pashler, 2007). Usually these theories account for worse performance in conjunction search, as compared to feature search, by an additional step of subset selection. However, neither an additional step nor switching costs can explain the observed set-size effects, that is, the drop in performance with increasing number of items. We must assume either that subset selection is less efficient for larger set sizes, or that simple-feature search within a subset has limited capacity (different from the same search over the full set of items). 
One possible explanation for lower selection efficiency with increasing set size is a limited spatial resolution of attention (Gobell et al., 2004). That model suggests that the observed set-size effects can be a consequence of a denser mixing of relevant and irrelevant items in a display for larger set sizes. Therefore, it should predict even stronger effects of proximity of items with different colors. Although our data reveal effects of proximity, these are too small to explain the considerably stronger effects of set size. It is possible that more efficient use of spatial segregation requires top-down preparation. Olds, Cowan, and Jolicoeur (1999) have found that unexpected grouping can make search performance even worse. However, the main mechanism of set-size effects in this study seems to be related not to spatial resolution but rather to a global capacity of attention. 
Conclusions
This study combined several improvements to the classic conjunction- and feature-search design, such as controlling for crowding effects by introducing appropriate distances between stimuli, and using proportion correct rather than reaction time as the measure of performance. The latter allowed us to analyze the data within the SDT framework. Unlike earlier studies using the SDT paradigm, the present results demonstrate that search for a conjunction of features (color and orientation) has limited-capacity characteristics, in line with the classic account of visual-search experiments. The differing results may reflect variable processing efficiency for different feature conjunctions, as well as differences across participants. An additional novel aspect of this study is the use of a Naka–Rushton transducer, which fitted the visual-search data better than traditional linear or power functions. 
Acknowledgments
We would like to thank all participants for their time and effort. EP and MK were supported by the Estonian Research Council grant PUT663, and EP was supported by institutional research funding IUT20-40 of Estonian Ministry of Education and Research. 
Commercial relationships: none. 
Corresponding author: Endel Põder. 
Address: Institute of Psychology, University of Tartu, Tartu, Estonia. 
References
Albrecht, D. G., & Hamilton, D. B. (1982). Striate cortex of monkey and cat: Contrast response function. Journal of Neurophysiology, 48, 217–237.
Andriessen, J. J., & Bouma, H. (1976). Eccentric vision: Adverse interactions between line segments. Vision Research, 16, 71–78.
Bergen, J. R., & Julesz, B. (1983, June 23). Parallel versus serial processing in rapid pattern discrimination. Nature, 303, 696–698.
Bouma, H. (1970, April 11). Interaction effects in parafoveal letter recognition. Nature, 226, 177–178.
Braun, J. (1994). Visual search among items of different salience: Removal of visual attention mimics a lesion in extrastriate area V4. The Journal of Neuroscience, 14, 554–567.
Carrasco, M., McLean, T. L., Katz, S. M., & Frieder, K. S. (1998). Feature asymmetries in visual search: Effects of display duration, target eccentricity, orientation and spatial frequency. Vision Research, 38, 347–374.
Carrasco, M., & Yeshurun, Y. (1998). The contribution of covert attention to the set-size and eccentricity effects in visual search. Journal of Experimental Psychology: Human Perception and Performance, 24, 673–692.
Cohen, A., & Ivry, R. (1991). Density effects in conjunction search: Evidence for a coarse location mechanism of feature integration. Journal of Experimental Psychology: Human Perception and Performance, 17, 891–901.
Di Lollo, V., Kawahara, J., Zuvic, S., & Visser, T. A. W. (2001). The preattentive emperor has no clothes: A dynamic redressing. Journal of Experimental Psychology: General, 130 (3), 479–492.
Duncan, J., & Humphreys, G. W. (1989). Visual search and stimulus similarity. Psychological Review, 96, 433–458.
Duncan, J., & Humphreys, G. W. (1992). Beyond the search surface: Visual search and attentional engagement. Journal of Experimental Psychology: Human Perception and Performance, 18, 578–588.
Eckstein, M. P. (1998). The lower visual search efficiency for conjunctions is due to noise and not serial attentional processing. Psychological Science, 9, 111–118.
Eckstein, M. P., Thomas, J. P., Palmer, J., & Shimozaki, S. S. (2000). A signal detection model predicts the effects of set size on visual search accuracy for feature, conjunction, triple conjunction, and disjunction displays. Perception and Psychophysics, 62 (3), 425–451.
Egeth, H. E., Virzi, R. A., & Garbart, H. (1984). Searching for conjunctively defined targets. Journal of Experimental Psychology: Human Perception and Performance, 10, 32–39.
Ellison, A., & Walsh, V. (1998). Perceptual learning in visual search: Some evidence of specificities. Vision Research, 38, 333–345.
Foley, J. M. (1994). Human luminance pattern-vision mechanisms: Masking experiments require a new model. Journal of the Optical Society of America A, 11, 1710–1719.
Foley, J. M., & Schwarz, W. (1998). Spatial attention: Effect of position uncertainty and number of distractor patterns on the threshold-versus-contrast function for contrast discrimination. Journal of the Optical Society of America A, 15, 1036–1047.
Geisler, W. S., & Chou, K. L. (1995). Separation of low-level and high-level factors in complex tasks: Visual search. Psychological Review, 102 (2), 356–378.
Gobell, J., Tseng, C. H., & Sperling, G. (2004). The spatial distribution of visual attention. Vision Research, 44, 1273–1296.
Huang, L., & Pashler, H. (2005). Attention capacity and task difficulty in visual search. Cognition, 94, B101–B111.
Huang, L., & Pashler, H. (2007). A Boolean map theory of visual attention. Psychological Review, 114, 599–631.
Intriligator, J., & Cavanagh, P. (2001). The spatial resolution of visual attention. Cognitive Psychology, 43, 171–216.
Kaptein, N. A., Theeuwes, J., & van der Heijden, A. H. C. (1995). Search for a conjunctively defined target can be selectively limited to a color-defined subset of elements. Journal of Experimental Psychology: Human Perception and Performance, 21, 1053–1069.
Klein, R. M., & Farrell, M. (1989). Search performance without eye movements. Perception & Psychophysics, 46 (5), 476–482.
Kröse, B., & Julesz, B. (1989). The control and speed of shifts of attention. Vision Research, 29, 1607–1619.
Li, Z. (2002). A saliency map in primary visual cortex. Trends in Cognitive Sciences, 6, 9–16.
Lobley, K., & Walsh, V. (1998). Perceptual learning in visual conjunction search. Perception, 27 (10), 1245–1255.
Macmillan, N. A., & Creelman, C. D. (1991). Detection theory: A user's guide. New York: Cambridge University Press.
Manassi, M., & Whitney, D. (2018). Multi-level crowding and the paradox of object recognition in clutter. Current Biology, 28, 127–133.
May, K. A., & Solomon, J. A. (2015). Connecting psychophysical performance to neuronal response properties II: Contrast decoding and detection. Journal of Vision, 15 (6): 9, 1–21, https://doi.org/10.1167/15.6.9. [PubMed] [Article]
Mazyar, H., Van den Berg, R., & Ma, W. J. (2012). Does precision decrease with set size? Journal of Vision, 12 (6): 10, 1–16, https://doi.org/10.1167/12.6.10. [PubMed] [Article]
McElree, B., & Carrasco, M. (1999). The temporal dynamics of visual search: Evidence for parallel processing in feature and conjunction searches. Journal of Experimental Psychology: Human Perception and Performance, 25, 1517–1539.
McLean, J. E. (1999). Processing capacity of visual perception and memory encoding (Unpublished doctoral dissertation). University of Washington, Seattle, WA.
Naka, K. I., & Rushton, W. A. H. (1966). S-potentials from colour units in the retina of fish (Cyprinidae). Journal of Physiology, 185, 536–555.
Nakayama, K., & Silverman, G. H. (1986, March 20). Serial and parallel processing of visual feature conjunctions. Nature, 320, 264–265.
Neri, P., & Levi, D. M. (2006). Spatial resolution for feature binding is impaired in peripheral and amblyopic vision. Journal of Neurophysiology, 96, 142–153.
Olds, E. S., Cowan, W. B., & Jolicoeur, P. (1999). Spatial organization of distractors in visual search. Canadian Journal of Experimental Psychology, 53, 150–159.
Palmer, J. (1994). Set-size effects in visual search: The effect of attention is independent of the stimulus for simple tasks. Vision Research, 34, 1703–1721.
Palmer, J., Ames, C. T., & Lindsey, D. T. (1993). Measuring the effect of attention on simple visual search. Journal of Experimental Psychology: Human Perception and Performance, 19, 108–130.
Palmer, J., Verghese, P., & Pavel, M. (2000). The psychophysics of visual search. Vision Research, 40, 1227–1268.
Pashler, H. (1987). Detecting conjunctions of color and form: Reassessing the serial search hypothesis. Perception & Psychophysics, 41, 191–201.
Pelli, D. G., Palomares, M., & Majaj, N. J. (2004). Crowding is unlike ordinary masking: Distinguishing feature detection and integration. Journal of Vision, 4 (12): 12, 1136–1169, https://doi.org/10.1167/4.12.12. [PubMed] [Article]
Põder, E. (2017). Combining local and global limitations of visual search. Journal of Vision, 17 (4): 10, 1–12, https://doi.org/10.1167/17.4.10. [PubMed] [Article]
Quinlan, P. T. (2003). Visual feature integration theory: Past, present, and future. Psychological Bulletin, 129, 643–673.
Rosenholtz, R. (1999). A simple model predicts a number of motion popout phenomena. Vision Research, 39, 3157–3163.
Rosenholtz, R. (2001). Visual search orientation among heterogeneous distractors: Experimental results and implications for signal-detection theory models of search. Journal of Experimental Psychology: Human Perception and Performance, 27 (4), 985–999.
Rosenholtz, R., Huang, J., & Ehinger, K. A. (2012). Rethinking the role of top-down attention in vision: Effects attributable to a lossy representation in peripheral vision. Frontiers in Psychology, 3, 13, https://doi.org/10.3389/fpsyg.2012.00013.
Rosenholtz, R., Huang, J., Raj, A., Balas, B. J., & Ilie, L. (2012). A summary statistic representation in peripheral vision explains visual search. Journal of Vision, 12 (4): 14, 1–17, https://doi.org/10.1167/12.4.14. [PubMed] [Article]
Shaw, M. L. (1980). Identifying attentional and decision-making components in information processing. In Nickerson R. S. (Ed.), Attention and performance VIII (pp. 277–296). Hillsdale, NJ: Erlbaum.
Townsend, J. T. (1971). A note on the identifiability of parallel and serial processes. Perception & Psychophysics, 10 (3), 161–163.
Treisman, A. (1991). Search, similarity, and integration of features between and within dimensions. Journal of Experimental Psychology: Human Perception and Performance, 17, 652–676.
Treisman, A. (1992). Spreading suppression or feature integration? A reply to Duncan and Humphreys. Journal of Experimental Psychology: Human Perception and Performance, 18, 589–593.
Treisman, A., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97–136.
Treisman, A., & Gormican, S. (1988). Feature analysis in early vision: Evidence from search asymmetries. Psychological Review, 95, 15–48.
Treisman, A., & Sato, S. (1990). Conjunction search revisited. Journal of Experimental Psychology: Human Perception and Performance, 16, 459–478.
Wolfe, J. M., Cave, K. R., & Franzel, S. L. (1989). Guided search: An alternative to the feature integration theory of attention. Journal of Experimental Psychology: Human Perception and Performance, 15, 419–433.
Figure 1
 
Examples of stimuli for conjunction search. Experiment 1: (A) horizontal and vertical bars (targets: red vertical and blue horizontal), set size 24, bar length 13 and (B) set size 4, bar length 5. (C) Experiment 3: tilted bars (targets: right-tilted red and left-tilted blue), set size 12, tilt 10°. Experiment 4: (D) spatially segregated and (E) spatially nonsegregated displays. All examples are with targets present.
Figure 2
 
Results averaged across participants for three experiments. (A) Conjunction search with horizontal and vertical bars, Experiment 1; (B) feature search with horizontal and vertical bars, Experiment 2; and (C) conjunction search with tilted bars, Experiment 3. The error bars represent the standard error of the mean.
Figure 3
 
Examples of individual results and model fits for (A) conjunction search and (B) feature search. Symbols indicate the experimental data; lines correspond to the best model fits.
Table 1
 
Goodness of fit (likelihood-ratio statistic G) of three models to individual data from the conjunction-search experiment. Statistically significant differences between the model and the data: *p < 0.05, ***p < 0.001.
Supplement 1