Features in visual search combine linearly
R. T. Pramod, S. P. Arun
Journal of Vision, April 2014, Vol. 14(4), 6. https://doi.org/10.1167/14.4.6
Abstract
Single features such as line orientation and length are known to guide visual search, but relatively little is known about how multiple features combine in search. To address this question, we investigated how search for targets differing in multiple features (intensity, length, orientation) from the distracters is related to searches for targets differing in each of the individual features. We tested race models (based on reaction times) and co-activation models (based on reciprocals of reaction times) for their ability to predict multiple feature searches. Multiple feature searches were best accounted for by a co-activation model in which feature information combined linearly (r = 0.95). This result agrees with the classic finding that these features are separable, i.e., subjective dissimilarity ratings sum linearly. We then replicated the classical finding that the length and width of a rectangle are integral features—in other words, they combine nonlinearly in visual search. However, to our surprise, upon including aspect ratio as an additional feature, length and width combined linearly and this model outperformed all other models. Thus, length and width of a rectangle became separable when considered together with aspect ratio. This finding predicts that searches involving shapes with identical aspect ratio should be more difficult than searches where shapes differ in aspect ratio. We confirmed this prediction on a variety of shapes. We conclude that features in visual search co-activate linearly and demonstrate for the first time that aspect ratio is a novel feature that guides visual search.

Introduction
Visual search in real life often consists of a target differing in multiple features from the distracters. Although the question of which individual features guide search has been studied extensively (Wolfe & Horowitz, 2004), we know relatively little about how these features combine together. For example, consider two searches in which the oddball target differs in either length (Figure 1A) or orientation (Figure 1B) from the distracters. What happens when the target differs in both length and orientation (Figure 1C)? How do length and orientation combine? It can be seen that search for the multiple feature condition is faster than the individual feature conditions, suggesting that both features play a role. Is there a quantitative relationship between multiple feature search and single feature searches? This question is related to two lines of research in the literature, which we review below. 
Figure 1
Example displays with targets differing in multiple or single features from the distracters. (A) A target differing only in length, (B) only in orientation, and (C) in both length and orientation from the distracters (ΔLr = 0.47 and ΔO = 15°). Subjects had to press a key to indicate the side of the array (left or right) on which the target appeared. Average search times across subjects (mean ± SEM) are shown below the displays. For all example displays shown, the actual displays were white items against a black background. (D) Schematic model for visual search. In this model, a salience signal arises at every location in the visual field that accumulates with time. An oddball target is detected when the salience signal at that location reaches threshold. According to this simple model, the product of the salience signal and reaction time (RT) equals the threshold. Conversely, the reciprocal of RT is proportional to the salience signal. (E) In the race model, the multiple feature search sets off separate accumulators for the two features, and a response is produced when the first accumulator reaches threshold. Thus the reaction time is the minimum of the two individual reaction times. (F) In the co-activation model, salience signals for length and orientation sum together and the net salience signal then accumulates to a threshold.
Relation to studies of dissimilarity
In classic studies of perceptual dissimilarity, subjects are asked to rate the dissimilarity between (say) a pair of objects differing in more than one feature as well as the dissimilarity between objects differing in each of the individual features (Attneave, 1950; Shepard, 1964; Hyman & Well, 1967; Lockhead & King, 1977; Dunn, 1983). This question has strong parallels to visual search: If the ease of search is determined by the target-distracter dissimilarity (Duncan & Humphreys, 1989), then it follows that the relationship between a multiple feature search and the corresponding single-feature searches is determined by the underlying dissimilarity relations. 
In the classic studies, dissimilarity ratings between objects differing in multiple features were found to be a linear combination of dissimilarity ratings for objects differing in single features. In more technical terms, dissimilarities combined according to a weighted city-block metric. This was observed for a variety of features, including circle size and diameter orientation (Dunn, 1983), size and brightness of rectangles (Attneave, 1950; Wiener-Ehrlich, 1978), and the size and tilt of parallelograms (Attneave, 1950; Dunn, 1983). However, squared dissimilarity ratings were found to combine linearly for other pairs of features, such as the length and width of a rectangle (Wender, 1971; Krantz & Tversky, 1975; Wiener-Ehrlich, 1978; Dunn, 1983), its area and shape (Krantz & Tversky, 1975; Wiener-Ehrlich, 1978), or the value and chroma of Munsell color chips (Hyman & Well, 1967, 1968). In technical terms, these dissimilarities combined according to a Euclidean distance metric. A concordant pattern of results emerged in Garner's speeded classification tasks (Garner & Felfoldy, 1970; Gottwald & Garner, 1972) in which subjects were asked to classify objects based on one feature while ignoring variations in the other feature. Features such as circle size and diameter orientation (which combined linearly in dissimilarity) showed no interference and were termed separable. In contrast, features such as rectangle length and width (which combined in a Euclidean manner in dissimilarity) showed strong interference effects and were termed integral (Garner & Felfoldy, 1970; Gottwald & Garner, 1972; Felfoldy, 1974; Cheng & Pachella, 1984; Potts, Melara, & Marks, 1998). 
Based on this evidence, it is widely believed that separable features can be classified independently and combine linearly, whereas integral features interfere in classification and combine in a Euclidean manner (Shepard, 1987). It must be emphasized that although most of these studies have assumed that dissimilarity ratings follow either a city-block or Euclidean distance metric, whether perceptual space has any metric structure at all has been contested, and alternative models have been proposed for how features interact (Tversky, 1977; Tversky & Gati, 1982). Notably, in the feature contrast model proposed by Tversky, objects may be dissimilar by virtue of distinctive features but may become similar because of shared common features (Tversky, 1977). 
To summarize, the broad question of how features combine in perception has been studied extensively using subjective dissimilarity ratings. But it does not necessarily follow that the same results should hold for visual search, because the underlying feature representations and mechanisms may be different. A subject asked to rate dissimilarity between two objects has to set up an internal scale of dissimilarity and map it to a numeric scale, and this judgment can be based on visual, verbal, and semantic similarity (Torgerson, 1965). In contrast, locating an oddball target during visual search relies on an implicit preattentive dissimilarity signal that draws attention to the target (Duncan & Humphreys, 1989; Wolfe & Horowitz, 2004), but this dissimilarity may depend on low-level (Treisman & Gelade, 1980) or high-level visual features (Wolfe & Horowitz, 2004; Sripati & Olson, 2010) and may or may not be influenced by nonvisual factors (Wolfe & Horowitz, 2004). 
Relation to studies of redundancy gain
The fact that search is facilitated when the target differs in two features rather than just one from the distracters has been observed previously for color and size (Found, 1998) and for pairs involving color, orientation, and spatial frequency (Eckstein, Thomas, Palmer, & Shimozaki, 2000; Krummenacher, Muller, & Heller, 2001, 2002; Monnier, 2006). These results have been interpreted as examples of the redundancy gain effect observed in a variety of identification tasks when targets are defined redundantly through multiple features (as in Figure 1C) or are more numerous (Mordkoff & Yantis, 1991, 1993; Zehetleitner, Krummenacher, & Muller, 2009). 
How does this facilitation occur? Most accounts of visual search posit that responses are triggered when an accumulating salience or decision signal reaches a threshold (Figure 1D; Brown & Heathcote, 2008; Carpenter, Reddi, & Anderson, 2009; Schall, Purcell, Heitz, Logan, & Palmeri, 2011). In this context, the facilitation has been explained using two accounts that differ in the stage at which they posit features to combine (Zehetleitner et al., 2009). In the race model, each feature sets off accumulators that race to threshold and a response is made whenever the first accumulator reaches threshold (Mordkoff & Yantis, 1991, 1993). In the co-activation model, features combine before the accumulation stage to create a net salience or decision signal which then accumulates to threshold (Zehetleitner et al., 2009). The simplest co-activation model is one in which the net salience signal is a linear combination of decision signals arising from the individual features (Shimozaki, Eckstein, & Abbey, 2002). Although the co-activation model has received some empirical support (Zehetleitner et al., 2009), these models have not been tested quantitatively across a wide variety of conditions. 
Overview of the present study
The goal of this study was to assess how multiple features combine in visual search. To this end, we asked subjects to locate an oddball target in an array of identical distracters. The target object differed from the distracters either in multiple features or along each of the individual features. Because we were interested in relating multiple-feature searches to single-feature searches, we maximized the number of these conditions by using arrays with a fixed set size. Using a fixed set size is not a problem because there is no evidence that the order of difficulty between two searches reverses with set size: If Search A is harder than Search B at one set size, it remains harder at another set size, although the magnitude of this difference depends on the respective search slopes (Nakayama & Martini, 2011). We then investigated whether multiple feature search could be explained using single feature searches. 
We tested a variety of race and co-activation models for their ability to predict multiple feature searches using single feature searches. Our model for visual search follows many standard accounts of visual search and decision making (Brown & Heathcote, 2008; Carpenter, Reddi, & Anderson, 2009; Schall et al., 2011): The behavioral response is triggered when an accumulating salience or decision signal reaches a threshold (Figure 1D). We tested a variety of race models in which the reaction time in the multiple feature search is a function of the reaction times in the individual feature searches. The simplest race model, in which independent accumulators are set off by each feature, is depicted in Figure 1E. To quantitatively investigate co-activation models, we needed an estimate of the underlying salience signal. In the standard visual search model, the reaction time is inversely proportional to the salience signal; its reciprocal can therefore be taken as an estimate of the salience signal (Figure 1D). We tested a variety of co-activation models in which the salience signal in the multiple feature search is a function of the saliences in the individual feature searches. The simplest co-activation model, in which saliences sum linearly, is depicted in Figure 1F. 
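To make the distinction concrete, here is a minimal Python sketch of the two schemes under a deterministic accumulation-to-threshold assumption. The salience values are hypothetical, and this is only an illustration of the two architectures, not the fitted models used below (the original analyses were run in MATLAB).

```python
import numpy as np

def rt_single(salience, threshold=1.0):
    """Time for a single salience signal to accumulate to threshold (Figure 1D)."""
    return threshold / salience

def rt_race(s1, s2, threshold=1.0):
    """Race model: independent accumulators; the first to reach threshold responds."""
    return min(rt_single(s1, threshold), rt_single(s2, threshold))

def rt_coactivation(s1, s2, threshold=1.0):
    """Co-activation model: saliences sum before accumulating to threshold."""
    return threshold / (s1 + s2)

s_len, s_ori = 0.8, 0.5   # hypothetical salience signals for length and orientation
rt1, rt2 = rt_single(s_len), rt_single(s_ori)
print(rt_race(s_len, s_ori))          # min(RT1, RT2) = 1.25
print(rt_coactivation(s_len, s_ori))  # ~0.77; here 1/RT12 = 1/RT1 + 1/RT2
assert np.isclose(1 / rt_coactivation(s_len, s_ori), 1 / rt1 + 1 / rt2)
```

In this idealized form, the race model predicts that the multiple feature RT equals the faster single-feature RT, whereas the co-activation model predicts that reciprocal RTs add.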
We conducted five visual search experiments on human subjects. The first three experiments involved rectangles that varied along simple salient features such as their length, intensity, and orientation. These features are separable according to classic studies, i.e., they combine linearly in dissimilarity ratings. In Experiment 1, targets differed in only one feature—intensity, length, or orientation. We sought to confirm whether search times depended on relative or absolute feature differences as predicted by classical psychophysical laws: For instance, does search time depend on the absolute or relative difference in length? We found that search depends on relative differences in length and intensity and on absolute differences in orientation. 
In Experiment 2, we investigated how pairs of features combine in visual search. We tested all possible pairs involving intensity, length, and orientation of rectangles. We found support for three models that outperformed other models in their ability to predict the multiple feature search data. In Experiment 3, we tested how all three features combine in search, and found that one model outperformed all others: a co-activation model in which features combine linearly. 
In Experiment 4, we investigated how the length and width of rectangles interact in visual search. This is a classic example of an integral pair of features. As in the subjective dissimilarity studies, we found that a model in which length and width signals combine in a Euclidean manner accounts for the data better than a model in which they combine linearly. However, we made the surprising observation that when aspect ratio of the rectangle was included as an additional feature (along with length and width), the resulting model, in which features combine linearly, outperformed all other models including the Euclidean model. This is an important finding because it indicates that integral features can potentially become separable upon the inclusion of an additional feature. 
The results of Experiment 4 predicted that aspect ratio is a feature in visual search. If this is true, searches involving changes in aspect ratio should be easier than those that do not, even when the net changes in length and width are equated. In Experiment 5, we confirmed this prediction for a variety of shapes. This demonstrates for the first time that aspect ratio is a feature that drives visual search, in the sense that its effect cannot be explained by changes in length and width alone. 
Experiment 1—Single features
In Experiment 1, we characterized how search performance depends on differences in individual features. There were three separate tasks in which targets differed either in intensity, length, or orientation from the distracters. Our goal was to characterize whether visual search depended on relative or absolute differences in these features and to compare how search time varies with these differences. 
Methods
Subjects
A total of 16 subjects, aged 20–30 years, with normal or corrected-to-normal vision participated in this experiment. Subjects were naïve to the purpose of the experiments and gave written consent to a protocol approved by the Institutional Ethics Committee of the Indian Institute of Science. They were seated approximately 60 cm from a computer monitor controlled by custom MATLAB programs using Psychtoolbox (Brainard, 1997), and performed three separate visual search tasks involving intensity, length, or orientation differences. 
Stimulus design
To investigate whether visual search depended on relative or absolute differences in a feature, we chose two baseline levels about which the target and distracter can vary and then varied the absolute feature difference between the target and distracters about each baseline level. The relative feature difference was defined as the absolute difference divided by the average feature value of the target and distracter (e.g., for orientations 30° and 50°, the absolute difference is 20° and the relative difference is 0.5). We chose feature differences and baseline levels according to a geometric progression to maximize the number of comparable conditions for both absolute and relative feature differences. Specifically, we chose relative feature differences as (a, ar, ar², ar³, ar⁴) and chose baseline levels as B and r²B. For a given baseline B and relative feature difference R, the absolute difference is RB, and the corresponding target and distracter feature values were chosen either as B + RB/2 and B − RB/2 or as B − RB/2 and B + RB/2, so that the average feature level was B. If visual search depended only on relative feature differences, then search reaction time should not differ between the two baseline levels for all five conditions. The corresponding absolute feature differences are (a, ar, ar², ar³, ar⁴)B at baseline B and (ar², ar³, ar⁴, ar⁵, ar⁶)B at baseline r²B. Thus, three absolute feature differences (ar², ar³, ar⁴)B can be compared at the two baselines. If visual search depended only on absolute feature differences, then search performance should be identical for these three conditions at the two baselines. We set a = 0.3, 0.21, and 0.3 for intensity, length, and orientation, respectively, and r = 1.25 for all three features. These values were chosen based on a pilot experiment on a separate group of subjects to ensure that search times vary for these feature differences. An appropriate baseline level B was chosen for each feature (see below). In all, there were 5 Relative Feature Levels × 2 Baseline Levels, and these 10 conditions were repeated a total of 16 times (with the target having a larger or smaller feature value than the distracter equally often, i.e., eight times each). 
As an example, consider how this works for length. With B set to 1.02° of visual angle (hereafter, dva), the two baseline lengths were B = 1.02 dva and r²B = 1.59 dva. For a baseline length of 1.02 dva and relative length difference of ar² = 0.328, the absolute feature difference is then 0.334 dva. As a result, the target and distracter had lengths of 1.19 dva and 0.85 dva or vice versa. Likewise, for the baseline length of 1.59 dva and relative length difference of 0.328, the absolute length difference is 0.52 dva, and the target and distracter lengths were 1.33 dva and 1.85 dva or vice versa. 
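A short Python sketch of this design, using the length-task values from the text, reproduces the worked example above. This is only an illustration of the arithmetic; the actual experiments were run with MATLAB/Psychtoolbox.

```python
import numpy as np

a, r, B = 0.21, 1.25, 1.02            # length task: a, r, and baseline (dva)
rel_diffs = a * r ** np.arange(5)     # (a, ar, ar^2, ar^3, ar^4)
for base in (B, r ** 2 * B):          # the two baseline levels, B and r^2*B
    for R in rel_diffs:
        delta = R * base                               # absolute difference
        tgt, dst = base + delta / 2, base - delta / 2  # average stays at baseline
        print(f"baseline {base:.2f} dva: {tgt:.2f} vs. {dst:.2f} (rel diff {R:.3f})")
```

For example, for base = 1.02 and R = 0.328, this prints target and distracter lengths of 1.19 and 0.85 dva, matching the worked example.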
Intensity task
Eight subjects participated in this task. Stimuli consisted of vertical rectangular bars measuring 1.27 dva × 0.16 dva. Intensity was measured in terms of fraction of the maximum allowable value (i.e., 255). The baseline value B was 0.38. 
Length task
Four subjects participated in this task. Stimuli were vertical rectangular bars with a fixed width of 0.16 dva with intensity 1.0 and a baseline length of 1.02 dva. 
Orientation task
Four subjects participated in this task. Stimuli were rectangular bars measuring 1.27 dva × 0.16 dva with intensity 1.0 and a baseline orientation of 40° counterclockwise from the horizontal. 
Procedure
Each task began with a motor reaction block (20 trials) to estimate the subjects' motor reaction time. The subject was asked to indicate the side of the screen (left or right) on which a white circle appeared (Z for left, M for right). This was followed by the main visual search block. Each trial began with a fixation cross (0.05 dva) that appeared for 0.5 s on a blank screen, followed by a 4 × 4 search array (with 4.07 dva interitem spacing) consisting of one oddball target among homogeneous distracters. The position of each item was jittered randomly according to a uniform distribution with a range of ±0.16 dva about its center to prevent low-order cues such as item alignment from influencing search. A red vertical line (width 0.11 dva) was displayed along the middle of the screen to facilitate left/right judgments. The search array was displayed for 10 s or until the subject made a response, whichever was sooner. Timed-out trials and error trials were repeated randomly later during the task. There were a total of 160 (5 Feature Difference Levels × 2 Baseline Values × 16 Repetitions) correct trials. Subjects were asked to report as quickly and accurately as possible the side of the screen on which the target appeared using a key press (Z for left, M for right). For a particular target-distracter pair there were eight trials in which the target appeared on the left and the remaining eight trials in which the target appeared on the right half of the screen. Target location and trial order were randomized across trials. 
Results
Subjects were highly consistent with each other in their performance (correlation in mean search times across unique conditions between two random halves of subjects: r = 0.99, p = 0.002 for intensity; r = 0.99, p = 0.00081 for length; and r = 0.99, p = 0.0012 for orientation). 
To establish whether relative or absolute intensity differences drive visual search, we plotted search reaction times averaged across repetitions and subjects as a function of absolute intensity difference (Figure 2A) and as a function of relative intensity difference (Figure 2B). It can be seen that varying the baseline had a larger impact on search reaction times as a function of absolute intensity difference (Figure 2A) compared to relative intensity differences (Figure 2B). To assess the statistical significance of this effect, we took search times averaged across repetitions and then calculated the absolute difference between these search times at the two baseline levels for each feature difference level. The impact of baseline on absolute intensity difference (mean RT difference = 0.83 s) was larger than its impact on relative intensity difference (mean RT difference = 0.38 s), and this difference was statistically significant as assessed using an analysis of variance (ANOVA) on the absolute RT differences with subject and feature type (relative/absolute) as factors, F(1, 32) = 14.69, p = 0.0006 for main effect of feature type; F(7, 32) = 1, p = 0.45 for main effect of subject; and F(7, 32) = 0.55, p = 0.79 for interaction effects. We conclude that visual search depends on relative rather than absolute intensity, as would be expected from Weber's law for brightness. 
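The ANOVA described above can be set up as in the Python sketch below. The data are simulated stand-ins centered on the reported mean RT differences (real per-subject values would replace them), and the analysis uses statsmodels rather than the MATLAB routines used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
rows = []
for subject in range(8):                     # eight subjects in the intensity task
    for level in range(5):                   # five feature-difference levels
        for measure, mu in [("absolute", 0.83), ("relative", 0.38)]:
            # |RT at baseline 1 - RT at baseline 2|; toy values near the reported means
            rows.append(dict(subject=subject, measure=measure,
                             rt_diff=abs(rng.normal(mu, 0.2))))
df = pd.DataFrame(rows)

# Two-way ANOVA with measure type (relative/absolute) and subject as factors
model = ols("rt_diff ~ C(measure) + C(subject) + C(measure):C(subject)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```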
Figure 2
Single feature variations (Experiment 1). (A) Search reaction time plotted as a function of absolute intensity difference between the target and distracters for the two baseline conditions (squares: Baseline 1; circles: Baseline 2). Points represent the mean search reaction time for each condition with error bars depicting the SEM across trials. (B) Search reaction time as a function of relative difference in intensity for the two baseline conditions. It can be seen that search times are virtually the same despite changing baseline, suggesting that search depends on relative rather than absolute intensity differences. (C) Reciprocal of search time plotted against the relative intensity difference for the two baselines showing a linear relationship. The slope and intercept of the regression line, as well as the correlation coefficient (r = 0.97), are shown. Asterisks indicate statistical significance (**** is p < 0.00005). (D)–(F) Similar plots for length, showing that search depends on relative rather than absolute length differences. (G)–(I) Similar plots for orientation showing here that search depends on absolute rather than relative orientation differences. In all cases, reciprocal RT was linear with feature differences.
We performed similar analyses for the length task. We plotted search times as a function of absolute length differences (Figure 2D) and as a function of relative length differences (Figure 2E). Once again, varying the baseline had a significantly greater impact when search times were plotted as a function of the absolute length difference than of the relative length difference, mean RT differences: 1.28 s for absolute length; 0.32 s for relative length; F(1, 16) = 36.8, p < 0.0001 for main effect of feature type; F(3, 16) = 7.5, p = 0.002 for main effect of subject; F(3, 16) = 2.16, p = 0.13 for their interaction. We conclude that visual search depends on relative rather than absolute differences in length, again as expected based on classical Weber's law for length. 
We then investigated whether visual search depends on absolute or relative differences in orientation. To this end, we plotted average search times as a function of absolute orientation difference (Figure 2G) or relative orientation difference (Figure 2H). Here, we found the opposite trend compared to intensity and length: The impact of baseline was significantly smaller for absolute than for relative orientation differences, mean RT difference between baselines: 0.21 s for absolute orientation; 0.736 s for relative orientation; F(1, 16) = 8.28, p = 0.01 for main effect of feature type; other effects were not significant: F(3, 16) = 0.5, p = 0.68 for subject and F(3, 16) = 0.15, p = 0.93 for interactions. We conclude that visual search depends on absolute rather than relative orientation differences. 
A notable aspect of all three tasks is that search times are nonlinearly related to the feature difference, whether relative or absolute. Thus, although search times do increase with increasing target-distracter similarity, this relationship is nonlinear. To assess the nature of this nonlinearity, we plotted the reciprocal of the average search time in each condition against the relative intensity difference (Figure 2C), relative length difference (Figure 2F), and absolute orientation difference (Figure 2I). We found a striking linear relationship in all three cases (r = 0.97 for intensity, r = 0.98 for length; r = 0.98 for orientation, p < 0.00005 in all cases). 
The linear relationship between 1/RT and feature differences might be a special case of RT being related to the logarithm of the feature difference. To assess this possibility, we calculated the correlation between RT and the logarithm of the feature difference for the three features. The resulting correlations (r = −0.95 for intensity, r = −0.98 for length, and r = −0.89 for orientation) were equal to or lower in magnitude than the correlations between 1/RT and the feature difference. We conclude that search times are inversely proportional to the feature difference. 
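This comparison amounts to computing two correlations per feature. A minimal Python sketch with hypothetical per-condition values:

```python
import numpy as np

# Hypothetical per-condition values: relative feature differences and mean RTs (s),
# roughly following RT = k/difference + constant
delta = np.array([0.21, 0.26, 0.33, 0.41, 0.51])
rt = np.array([3.2, 2.4, 1.9, 1.5, 1.25])

r_recip = np.corrcoef(delta, 1 / rt)[0, 1]    # linearity of 1/RT in the difference
r_log = np.corrcoef(np.log(delta), rt)[0, 1]  # alternative: RT vs. log(difference)
print(f"corr(1/RT, delta) = {r_recip:.2f}; corr(RT, log delta) = {r_log:.2f}")
```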
To summarize, there were two main findings from this experiment: First, visual search is driven by relative differences in intensity and length and by absolute differences in orientation. Second, the reciprocal of the visual search time increases linearly in all cases with the corresponding feature difference (relative or absolute). The first finding is consistent with classical Weber's law for subjective magnitude for intensity and length, but it has not been characterized in visual search. The second finding extends our previous observation that 1/RT is linear with orientation difference (Arun, 2012) to two additional features—intensity and length. It must be noted that the linear relationship between 1/RT and feature difference holds primarily in the “dynamic” range, where search times vary appreciably with feature differences. For example, search times are inversely proportional to orientation differences up to 45°, but do not decrease any further—there is no detectable difference in search time whether targets differ by 50° or 70° from the distracters. 
Experiment 2—Two features
In Experiment 2, we investigated how two features combine in visual search. We tested all possible pairs of features involving intensity, length, and orientation. In each task involving a pair of features, subjects performed searches where the target could differ from the distracter in one or both features. We then used the single feature data to predict search in the multiple feature conditions. 
Methods
Subjects
A total of 42 subjects participated in three tasks: intensity-length (IL, n = 16), intensity-orientation (IO, n = 14), and length-orientation (LO, n = 12). Other details are as in Experiment 1. 
Stimulus design
The design was similar to Experiment 1—there were five nonzero levels for each feature and two baseline levels, with values as before. Since we were interested in conditions where either one or two features could vary, we took each feature to have a total of six possible levels including zero. This resulted in a total of 35 search conditions (6 Levels for Feature 1 × 6 Levels for Feature 2, excluding the null condition in which both feature differences are zero). Of these, 25 conditions involved two-feature differences, and 10 involved only one feature. Because the design included the single feature conditions with varying baselines, we verified that the results of Experiment 1 hold even when single feature searches are intermixed with other searches (data not shown). 
Procedure
For each feature difference condition, there were a total of 16 trials (2 Baseline Levels for Feature-1 × 2 Baseline Levels for Feature-2 × 4 Repetitions). The four repetitions consisted of trials in which the feature difference for both features was either positive or negative for the target compared to the distracters, with the target occurring either on the left or right. Thus there were a total of 560 trials in each task. All other details were identical to Experiment 1, with the exception that trials timed out 7.5 s after search display onset. 
Model fitting
For each unique pair of feature differences, we calculated the average search reaction times across subjects (RT12), and the average reaction times in the corresponding single-feature conditions (RT1 and RT2). We then fit a linear model where the observed variable was regressed against the predictors corresponding to each model using the equation y = Xb, where y is the vector of observed multiple-feature search data (RT or reciprocal RT), X is a matrix containing the single feature conditions, and b is a vector of weights corresponding to each feature. 
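In Python, this regression reduces to an ordinary least-squares solve of y = Xb (the study's analyses were done in MATLAB; the numbers below are placeholders):

```python
import numpy as np

def fit_linear_model(y, predictors):
    """Least-squares solution of y = Xb, with an intercept column appended."""
    X = np.column_stack(predictors + [np.ones_like(y)])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

# Toy stand-ins: reciprocal RTs for two single-feature searches and the paired search
d1 = np.array([0.30, 0.40, 0.50, 0.60])
d2 = np.array([0.20, 0.20, 0.30, 0.40])
d12 = np.array([0.48, 0.57, 0.76, 0.95])
print(fit_linear_model(d12, [d1, d2]))  # weights for d1, d2, and the intercept
```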
Model comparisons
An important consideration in comparing models is that more complex models automatically provide better fits to the data because of their greater degrees of freedom. To address this issue, we used a composite measure of quality of fit, the corrected Akaike Information Criterion or AICc (McMahon & Olson, 2009), which penalizes models for the number of free parameters. This allowed us to assess how well each model fit the data independent of its intrinsic complexity. For each model, the AICc was calculated as AICc = N·log(SS/N) + 2K + 2K(K + 1)/(N − K − 1), where SS is the sum-of-squared difference between predicted and observed values, N is the number of observations, and K is the number of free parameters in the model. Larger absolute values of AICc indicate better model performance. 
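A direct transcription of this formula (the sign convention follows the equation as printed; the text compares magnitudes, with larger |AICc| indicating better fits):

```python
import numpy as np

def aicc(observed, predicted, k):
    """Corrected AIC: N*log(SS/N) + 2K + 2K(K+1)/(N - K - 1)."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    n = len(observed)
    ss = np.sum((observed - predicted) ** 2)          # sum-of-squared error
    return n * np.log(ss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)
```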
To compare the quality of fit between two models, we obtained bootstrap-based samples of AICc for each model as follows: We selected 25 feature conditions with replacement from the 25 multiple feature conditions and then took the corresponding search times for single and multiple feature conditions. We then fit the model and calculated the AICc. This procedure was repeated 100 times to estimate the variation in model AICc expected if a similar experiment was repeated many times. We then performed an unpaired t test between the bootstrap-derived AICc samples to assess whether the mean AICc values were significantly different between the two models. We obtained qualitatively similar results on changing the bootstrap sample size. 
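The bootstrap comparison might be sketched as below, reusing the fit_linear_model and aicc helpers from above. The resampling scheme and sample count follow the text; everything else is illustrative.

```python
import numpy as np
from scipy import stats

def bootstrap_aicc(d12, d1, d2, k, n_boot=100, seed=0):
    """Resample the multiple-feature conditions with replacement, refit, collect AICc."""
    rng = np.random.default_rng(seed)
    n = len(d12)
    samples = []
    for _ in range(n_boot):
        i = rng.integers(0, n, size=n)       # e.g., 25-of-25 resampling, per the text
        b = fit_linear_model(d12[i], [d1[i], d2[i]])
        pred = b[0] * d1[i] + b[1] * d2[i] + b[2]
        samples.append(aicc(d12[i], pred, k))
    return np.array(samples)

# Unpaired t test between the AICc samples of two competing models:
# t, p = stats.ttest_ind(samples_model_a, samples_model_b)
```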
Results
Subjects performed visual search tasks in which the target could differ in either one or two features at a time from the distracters. There were three separate tasks, in which we tested intensity length (IL), intensity orientation (IO), and length orientation (LO). Subjects were extremely consistent in their responses (split-half correlations across the 35 search conditions: r = 0.98 for IL, r = 0.97 for IO, r = 0.98 for LO, p < 0.00005 in all cases). 
Example search displays from the LO experiment are shown in Figure 1. When the target differs both in length and orientation (Figure 1C), search is easier compared to when it differs in length alone (Figure 1A) or in orientation alone (Figure 1B). Thus, both length and orientation seem to contribute in making the two-feature search easy. To assess whether this was true across all conditions, we plotted the average search times across subjects as a function of differences in one feature while holding the other feature difference constant at different levels. The resulting plot for intensity-length (Figure 3A) shows that even when the relative intensity difference is large (Figure 3A, ΔIr = 0.73), changing the relative length had an effect on search reaction times. We observed a similar pattern for both intensity-orientation (Figure 3B) and length-orientation (Figure 3C) tasks. These trends were not the result of a speed accuracy trade-off: Decreases in reaction times were always accompanied by increases in accuracy (Figures 3D–F). Note that because most trials involved targets differing in two features from the distracters, subjects could potentially have performed the task by attending to only one feature. Instead, they clearly seem to have utilized both features to do the task. 
Figure 3
Two-feature searches (Experiment 2). (A) Search times plotted against relative length difference for different levels of relative intensity differences (denoted as ΔIr). The red line depicts the case when the target differs only in length but not in intensity (ΔIr = 0) from the distracters. The other lines represent search times when the target differs in intensity by a fixed level (ΔIr varying from 0.30 to 0.73) and relative length difference (ΔLr) is varied from zero to 0.73. Error bars represent the SEM calculated across trials. It can be seen that both features combine in search. (B) and (C): Similar plots for the intensity-orientation and length-orientation tasks. (D)–(F) Average accuracy for each condition in the three experiments. Error bars represent SEM across subjects. (G) Observed reciprocal RT for the multiple feature searches in the intensity-length task plotted against the reciprocal RT predicted as a linear combination of the individual feature 1/RTs (i.e., using searches in which ΔIr = 0 or ΔLr = 0). The correlation coefficient is depicted at the top left. Asterisks represent statistical significance with conventions as before. (H) and (I): Observed versus predicted reciprocal RTs in the intensity-orientation and the length-orientation tasks, respectively.
Next, we investigated the quantitative relationship between the multiple feature searches and the single feature searches. We tested two categories of models: race models (based on RT), and co-activation models (based on the reciprocal RT). These models differ in the stage at which multiple features combine: in RT-based models, each feature sets off separate accumulators that may or may not interact. In 1/RT-based models, salience signals corresponding to each feature combine before accumulating to threshold. In each category, we tested five simple relationships between the multiple feature and single feature searches (see Table 1 for equations). In the first model, multiple feature search depends solely on the easier of the two single-feature searches. In the second model, the multiple feature search depends on a linear combination of the single-feature searches. The third model represents a multiplicative interaction between the single-feature searches. The fourth model is a linear model with interaction terms—a combination of the preceding two models. The fifth model represents a Euclidean combination of the two single feature conditions. Note that these models specify different ways in which features might combine in search. In each case, model parameters were adjusted to minimize the sum-of-squared error between the model and the data using linear regression. 
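These five relationships can be cast as design matrices for the regression described under Model fitting. A Python sketch with toy values follows; note that the Euclidean variant regresses the squared measure, consistent with the squared formulation in Table 1.

```python
import numpy as np

def design(model, d1, d2):
    """Design matrix for each candidate model (1/RT version; intercept included)."""
    if model == "best-feature":        # easier search only: larger d = easier search
        cols = [np.maximum(d1, d2)]
    elif model == "additive":
        cols = [d1, d2]
    elif model == "multiplicative":
        cols = [d1 * d2]
    elif model == "interaction":
        cols = [d1, d2, d1 * d2]
    elif model == "euclidean":         # fit d12^2 against d1^2 and d2^2
        cols = [d1 ** 2, d2 ** 2]
    return np.column_stack(cols + [np.ones_like(d1)])

d1 = np.array([0.30, 0.40, 0.50, 0.60])   # toy single-feature reciprocal RTs
d2 = np.array([0.20, 0.20, 0.30, 0.40])
d12 = np.array([0.48, 0.57, 0.76, 0.95])  # toy multiple-feature reciprocal RTs
for m in ["best-feature", "additive", "multiplicative", "interaction", "euclidean"]:
    y = d12 ** 2 if m == "euclidean" else d12
    b, *_ = np.linalg.lstsq(design(m, d1, d2), y, rcond=None)
    print(m, np.round(b, 3))
```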
Table 1
Summary of model performance in Experiments 2 and 3. In the formulae, RT12, RT1, and RT2 (or d12, d1, d2) are search times (or reciprocal RTs) in the multiple feature condition and in the two single-feature conditions, respectively. Formulas are shown only for the two-feature conditions in Experiment 2 for simplicity but include the third single-feature condition (RT3 or d3) in Experiment 3. Model performance was measured using the correlation coefficient between the model predictions and the data (RT or 1/RT as applicable). To account for differences in the number of parameters between models and to compare models on the same scale, we calculated a quality-of-fit measure (AICc—see text) using the observed and predicted reciprocal RTs for each model. Larger AICc values represent better fits. The best model is highlighted in bold—the additive 1/RT model. Asterisks represent statistical significance of the comparison between each model with the best model (obtained using a Fisher's z test for correlation coefficients and using an unpaired t test on bootstrap samples for AICc; p < 0.05).
To assess the quality of fit of each model, we calculated the correlation coefficient between observed and predicted search data, as well as the corrected Akaike Information Criterion (AICc), which measures the sum-of-squared error while taking into account differences in the number of free parameters. Model performance for each experiment (IL, IO, and LO) is summarized in Table 1. Across all three tasks, three models consistently outperformed all others both in terms of correlation with the data and in their quality of fit (AICc): the interaction RT model, the additive 1/RT model, and the interaction 1/RT model. Among these three models, however, no single model consistently outperformed the others across all three tasks. 
To visualize the degree of fit obtained by these models, we selected the additive 1/RT model as representative of the three models and plotted the observed reciprocal RT in the multiple feature condition against the predicted values for each task (Figures 3G–I). The resulting correlations are extremely high and statistically significant (r = 0.99, 0.94, and 0.97 for the IL, IO, and LO tasks, respectively; p < 0.00005 in all cases). The best-fitting model coefficients (Table 2) offer additional insights into the underlying mechanisms used by subjects: For example, the model coefficients indicate that subjects relied more on length in the IL task, on intensity in the IO task, and on orientation in the LO task (Table 2). Interestingly, this agreed with the subjects' own reports after the experiment: Most subjects said they relied on length in the IL task and on orientation in the LO task. There were mixed reports from the subjects in the IO task. 
Table 2
Best-fitting model coefficients for Experiments 2 and 3. For each model, the best-fitting coefficients are found by a linear regression between the multiple feature search and the corresponding individual feature conditions (using either RT or 1/RT as the measure).
In summary, we conclude that, when a target differs in two features from the distracters, individual feature searches can be combined to predict multiple feature searches. We found three models that were equivalent to each other but outperformed all other models. Of these, the additive 1/RT model outperformed the others in subsequent experiments (see below). 
Experiment 3—Three features
The main finding of Experiment 2 was that three models explain the multiple feature condition in terms of the single feature searches. The goal of this experiment was to investigate this relationship when a target differs in three features from the distracters (intensity, length, orientation). 
Methods
Subjects
Eight subjects participated in this task. Other details are as in Experiment 1
Stimulus design
The design was similar to Experiment 1—there were four nonzero feature difference levels for each feature and one baseline level (B = 0.47 for intensity, 1.27 dva for length, and 50° for orientation). This resulted in a total of 64 search conditions where all three features were different (4 × 4 × 4 levels for Feature 1, 2, 3). In addition to this, there were 12 single feature conditions (3 Features × 4 Levels each) in which the target differed along only one feature. Thus there were a total of 76 search conditions. 
Procedure
For each feature difference condition, there were a total of 16 repetitions as before, resulting in 1,216 trials. All other details were identical to Experiment 1
Results
Subjects searched for oddball targets that could differ in either one or three features from the distracters and were extremely consistent in their performance (split-half correlation: r = 0.97, p < 0.00005). Example search displays from this experiment are shown in Figure 4. Search is easier when the target differs in all three features (Figure 4D) than when it differs in any one of the individual features (Figures 4AC). We performed analyses exactly as before to compare models based on RT and reciprocal RT. Model performance is summarized in Table 1 (model coefficients are in Table 2). We found that the additive 1/RT model had the highest correlation with the data (Figure 4E), and outperformed all other models in terms of quality of fit (comparison of AICc values: p < 0.00005). Unlike in the previous experiment where the additive 1/RT model was equivalent in performance to the interaction models, here this model outperformed all models in terms of quality of fit. Thus, we conclude that, when a target varies along three features (intensity, length, orientation) from its distracters, the resulting reciprocal RT can be explained almost entirely by a linear sum of reciprocal RTs pertaining to each individual feature. In other words, features in visual search combine linearly. 
Figure 4
Three-feature searches (Experiment 3). Example search displays in which the target differed in (A) intensity, (B) length, (C) orientation, and (D) all three features (ΔIr = 0.3, ΔLr = 0.38, ΔO = 24°). Target intensity in the display is only approximate and does not reflect the true intensity used in the experiment. Average search reaction times (mean ± SEM) are shown below. It can be seen that the three features combine to produce easier search in (D). (E) Observed versus predicted reciprocal RTs for the additive 1/RT model.
Experiment 4: Integral features
Experiments 2 and 3 suggest that, at least for the features tested (intensity, length, orientation), reciprocal RTs sum linearly when the target differs along multiple features from the distracters. This is a novel finding in the context of visual search but is consistent with the finding that these features are separable in classic dissimilarity studies. But do only separable features sum linearly? How do integral features combine? We addressed this question using a classic pair of integral features—namely, the length and width of a rectangle. 
Methods
Subjects
Twelve subjects participated in this study. Other details are as in Experiment 1
Stimuli and procedure
All details were exactly as in Experiment 2, except that the two features were length (L) and width (W) of rectangular bars. There were two baseline levels for each feature (B = 1.02 dva for length, 0.25 dva for width; a = 0.21, r = 1.25). Changes in length and width were constrained such that the rectangular bars were always vertical in orientation, with length greater than width. 
Model fitting
In this experiment, we tested two additional features of rectangles for their ability to explain the multiple feature search data: the area (A) and the aspect ratio (S = L/W). Because these features covaried with changes in length and width, we adopted a different approach to model fitting. We reasoned that, because reciprocal RTs vary linearly with the underlying feature difference, the feature differences themselves could be used to predict the multiple feature data. Thus, for each search condition, we calculated the relative differences in length, width, aspect ratio, and area as potential predictors. We then sought to explain the multiple-feature search dissimilarities using models that combine these feature differences directly. This allowed us to study the relationships between the different features—length, width, area, and aspect ratio—for the same set of search conditions in a manner that was not biased by the experiment design. 
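A sketch of this predictor computation in Python, assuming relative differences are defined as in Experiment 1 (absolute difference divided by the mean of the two values); the rectangle dimensions below are hypothetical:

```python
def rel_diff(x_target, x_distracter):
    """Relative difference: |difference| divided by the mean of the two values."""
    return abs(x_target - x_distracter) / ((x_target + x_distracter) / 2)

def rectangle_predictors(Lt, Wt, Ld, Wd):
    """Relative differences in length, width, aspect ratio (L/W), and area (L*W)."""
    return dict(dL=rel_diff(Lt, Ld),
                dW=rel_diff(Wt, Wd),
                dS=rel_diff(Lt / Wt, Ld / Wd),   # aspect ratio = length / width
                dA=rel_diff(Lt * Wt, Ld * Wd))

# Hypothetical target (Lt, Wt) and distracter (Ld, Wd) dimensions in dva
print(rectangle_predictors(Lt=1.19, Wt=0.28, Ld=0.85, Wd=0.22))
```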
Results
Subjects searched for oddball targets which could differ in length and width from the distracters and were extremely consistent in their responses (split-half correlation, r = 0.98, p < 0.00005). Example displays from this task are shown in Figure 5. It can be seen that search involving a target differing in both length and width is easier than searches involving the individual features. 
Figure 5
Rectangle length and width (Experiment 4). Example search displays in which the target differs from the distracters (A) only in length and (B) only in width and (C) in both length and width (ΔLr = 0.3 and ΔWr = 0.3). It can be seen that both length and width contribute to the overall search in (C). (D) Observed versus predicted reciprocal RTs for the best model (additive 1/RT model with length, width, and aspect ratio). (E) Quality of fit for some of the models tested in this experiment, as measured using the corrected Akaike Information Criterion (AICc) (for the full set of models, see Table 3). Larger values of AICc indicate better fits (see text). Acronyms in each model description refer to the quantities being used for prediction by the model (L is length, W is width, A is area, S is aspect ratio). The best model (additive LWS) performed significantly better than all other models. For a subset of comparisons, the statistical significance of the AICc comparisons is indicated beside the bars using asterisks. Asterisks beside correlation coefficients represent their statistical significance. * is p < 0.05, **** is p < 0.00005.
Table 3
Model performance in Experiment 4. Each model depicts a particular relationship between observed reciprocal RT (d12) and relative feature differences in length (L), width (W), aspect ratio (S), and/or area (A). Aspect ratio was defined as the ratio of length to width, and area was their product. This definition of aspect ratio gave better model performance than defining aspect ratio as width/length. The best model, highlighted in bold, was a model in which relative differences in length, width, and aspect ratio combined linearly. Asterisks beside the correlations and AICc values depict the statistical significance of the comparison between the best model and each model. For each model, the best-fitting model coefficients are given in the equations.
Sl. # | Model | Description | Correlation with observed 1/RT | Quality of fit (AICc)
1 | Additive LW | d12 = 0.5L + 0.35W + 0.11 | 0.93 | 799*
2 | Euclidean LW | d12² = 0.61L² + 0.41W² + 0.006 | 0.91* | 827*
3 | Interaction LW | d12 = 0.65L + 0.51W + 0.34LW + 0.04 | 0.94 | 807*
4 | Additive LWS | | 0.95 | 834
5 | Additive LWA | d12 = 0.27L + 0.12W + 0.27A + 0.09 | 0.93 | 797*
6 | Additive AS | d12 = 0.51A + 0.13S + 0.02 | 0.93 | 798*
7 | Euclidean AS | d12² = 0.31A² + 0.17S² − 0.003 | 0.89* | 795*
8 | Interaction AS | d12 = 0.55A + 0.24S − 0.14AS − 0.008 | 0.93 | 798*
9 | A only | d12 = 0.49A + 0.08 | 0.91* | 775*
10 | S only | d12 = −0.05S + 0.48 | 0.07* | 530*
We then tested a variety of models that relate multiple feature search to single feature searches. As before, models based on 1/RT outperformed models based on search times (not shown for simplicity). Model performance is summarized in Table 3. The additive length and width model had a lower quality of fit compared to the Euclidean model (unpaired t test on bootstrap-sampled AICc values, p < 0.00005). This is consistent with the classic finding that dissimilarities related to length and width combine in a Euclidean manner (Wender, 1971; Krantz & Tversky, 1975; Wiener-Ehrlich, 1978; Dunn, 1983). 
Intrigued by the success of the Euclidean model, we investigated whether the addition of an interaction term (i.e., the product of differences in L and W) would explain the data better. This was not the case: The interaction model was better than the additive linear model but was outperformed by the Euclidean model (r = 0.94 for the interaction model; both AICc comparisons p < 0.00005; Figure 5E). Following previous work, we also tested two other features of rectangles: the area (A) and aspect ratio (S). Both area and aspect ratio change when a rectangle changes in length or width, and so we tested whether area or aspect ratio alone might account for the multiple feature searches. This was not the case—models that combined length and width linearly performed better than models involving area or aspect ratio in various ways (Table 3). 
On exhaustively testing combinations of length, width, area, and aspect ratio, we found a single model that outperformed all others (Table 3). In this model, the reciprocal RT for a target differing in length and width from the distracters is a linear combination of the reciprocal RTs obtained when length, width, or aspect ratio alone differs. This model beat every other model (AICc comparison: p < 0.00005 in all cases) and had the highest correlation with the data (r = 0.95). This is a surprising result because it implies that features that appear integral, such as the length and width of a rectangle, can become separable in the presence of an additional feature, namely aspect ratio. Why then did the Euclidean and interaction models work better than the additive model? 
This can be explained mathematically as follows. Let the distracter rectangles have length L and width W, so that their aspect ratio is SD = L/W. Let the target rectangle have dimensions (L + ΔL, W + ΔW), so that the relative changes in length and width are dL = ΔL/L and dW = ΔW/W, and the target aspect ratio is ST = (L + ΔL)/(W + ΔW). The relative difference in aspect ratio (dS) between target and distracter can then be written as

dS = (ST − SD)/SD = (1 + dL)/(1 + dW) − 1 ≈ dL − dW − dL·dW + dW²

where the approximation results from a Taylor series expansion of the denominator (1/(1 + dW) ≈ 1 − dW + dW²). It can be seen that the aspect ratio term is approximated by product terms such as dL·dW as well as squared-dissimilarity terms such as dW². As a result, when aspect ratio is not explicitly included as a term in the model, any model that contains interaction or squared-dissimilarity terms will provide better fits to the data. This is exactly what we found. However, because neither the Euclidean nor the interaction terms capture the aspect ratio effect precisely, including the exact aspect ratio term results in a significant improvement over these models. 
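As a sanity check, here is a tiny Python snippet (ours, with arbitrary illustrative values rather than the experimental parameters) comparing the exact relative aspect-ratio difference with its second-order Taylor approximation:

```python
# Verify dS = (1 + dL)/(1 + dW) - 1 against its second-order Taylor expansion.
dL, dW = 0.3, 0.2                     # arbitrary relative changes in length, width
exact = (1 + dL) / (1 + dW) - 1       # exact relative aspect-ratio difference
approx = dL - dW - dL * dW + dW ** 2  # expansion: dL - dW - dL*dW + dW^2
print(exact, approx)                  # 0.0833... vs 0.0800: close for small changes
```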
To illustrate the effect of aspect ratio in visual search, we selected searches in which the relative length and width differences between the target and distracters were approximately equal (Figure 6). If search were based only on length and width differences, all three searches should be equally difficult. However, for two of the searches (Figure 6A, B), the relative length and width differences were such that the target and distracters also differed in aspect ratio. In the third search (Figure 6C), the relative length and width differences were equal, so the target and distracters had the same aspect ratio. Thus, if dissimilarity in search arises not only from length and width but also from aspect ratio, the net dissimilarity in the first two searches should be larger than in the third. Indeed, it can be seen that the third search is harder than the first two. 
Figure 6
 
Example searches illustrating the role of aspect ratio in visual search (Experiment 4). Three searches are shown in which the net dissimilarity due to relative length (ΔLr) and relative width (ΔWr) differences is approximately equal. In (A) and (B), the target differs in length and width as well as aspect ratio. In (C), the target differs in length and width but not in aspect ratio from the distracters. Searches (A) and (B) are comparable in difficulty, but search (C) is slightly harder. This pattern cannot be explained by length and width; it is explained instead by the fact that the target in (C), despite having the same net salience for length and width, has no aspect ratio difference from the distracters.
We conclude that when a target differs from the distracters in both length and width, the resulting search can be explained as a linear sum of dissimilarities arising from length, width, and aspect ratio. Thus, integral features such as length and width of a rectangle become separable upon the inclusion of an additional feature, the aspect ratio. 
Experiment 5: Aspect ratio
In Experiment 4, we found that searches involving rectangles with equal aspect ratio are harder than searches involving rectangles that differ in aspect ratio, provided the net change in length and width is held constant. Here, we set out to investigate whether this result holds for other shapes. Our hypothesis was that, if aspect ratio drives visual search, then search for a target differing in aspect ratio should be easier than search for a target with the same aspect ratio as the distracters. Since changes in aspect ratio always involve changes in length and width, it is critical to equate these changes while testing for aspect ratio; otherwise, any observed difference in search performance could be attributed to length and width. Accordingly, we ensured that the net change in length and width was constant across all searches. 
Although we initially considered using natural objects, they are not suitable because changing their aspect ratio alters features other than length and width. Consider, for example, the giraffe depicted in Figure 7A. Changing its aspect ratio inevitably changes the orientation of its neck and the curvature of several contours. As a result, search for a giraffe with a different aspect ratio among copies of the original giraffe would be easy not only because of the change in aspect ratio but also because of the accompanying changes in orientation, curvature, etc. The only shapes whose aspect ratio can be manipulated without changing any feature other than length and width are those containing only horizontal and vertical orientations. We accordingly chose eight such shapes for this experiment. 
Figure 7
 
Aspect ratio drives visual search (Experiment 5). (A) Example objects with same aspect ratio and different aspect ratio. A giraffe is not suitable because changing its aspect ratio also changes its neck orientation as well as other features. The only class of objects where changes in aspect ratio do not change any other feature apart from length or width are those that contain only horizontal and vertical orientations. Note that the net change in length and width is 50% for both same and different aspect ratio images. (B) Example search displays in which the target has the same aspect ratio (left) or different aspect ratio (right). In both cases the net change in length and width is the same, yet subjects took longer to find the target in the same aspect ratio displays. This difference in search time can only be explained if aspect ratio drives visual search. Average search times (mean ± SEM across subjects) are depicted below each display. (C) Average search reaction times (error bars represent SEM) for the same (green) and different (blue) aspect ratio conditions for individual shapes, together with the ensemble averages (extreme right). Asterisks represent the statistical significance of the main effect of aspect ratio (same versus different) as assessed using an ANOVA on search times (*** is p < 0.0005, ** is p < 0.005, and * is p < 0.05).
Methods
Subjects
A total of 15 subjects participated in the experiment. Other details are as before. 
Stimuli
We tested a total of eight shapes, all containing only horizontal and vertical orientations (Figure 7C, x axis). The shapes varied in complexity, containing either one line (a horizontal or vertical bar), two lines (L, T, plus), or three lines (step, H, C). All shapes fit within a square measuring 0.57 dva on each side, with a fixed stroke width of 0.19 dva. 
Design
The target and distracters in a given search array were always the same shape, except for changes in length or width. There were two main conditions: the target and distracters could have either the same aspect ratio or different aspect ratios. In the same aspect ratio condition, the length and width of the shape were changed in two ways: both increased by 25% or both decreased by 25%, so the net change in length and width was 50%. In the different aspect ratio condition, we modified length and width in two ways: (a) length increased by 50% with no change in width, or (b) length decreased by 10% and width by 40%. In both cases the aspect ratio changed by 50%, while the net change in length and width remained 50%, as in the same aspect ratio condition. In all, there were four modifications per shape (two each for the same and different aspect ratio conditions), as summarized in the sketch below. 
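The following Python sketch (our own illustration; the condition labels are hypothetical) lists the four (length, width) scale factors and verifies that the net change in length and width is 50% in every condition, while aspect ratio changes by a factor of 1.5 only in the different aspect ratio conditions:

```python
# The four stimulus modifications from the Design section, as (length, width) scales.
modifications = {
    "same AR, enlarged":  (1.25, 1.25),  # both +25%
    "same AR, shrunk":    (0.75, 0.75),  # both -25%
    "diff AR, elongated": (1.50, 1.00),  # length +50%, width unchanged
    "diff AR, flattened": (0.90, 0.60),  # length -10%, width -40%
}
for name, (sL, sW) in modifications.items():
    net_change = abs(sL - 1) + abs(sW - 1)  # net change in length and width
    ar_change = sL / sW                     # factor by which aspect ratio (L/W) changes
    print(f"{name}: net change = {net_change:.0%}, aspect ratio x {ar_change:.2f}")
```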
Procedure
Subjects performed 32 correct trials for each shape (4 Modifications/Shape × 8 Repetitions, with target equally often on left and right), resulting in a total of 256 correct trials. Other experimental procedures are as before. We observed no significant differences in accuracy across conditions. 
Results
Subjects performed searches in which the target either did or did not differ from the distracters in aspect ratio, while the net change in length and width was held constant. An example search display is shown in Figure 7B. It can be seen that, when the net change in length and width is held constant, the target with the same aspect ratio as the distracters is harder to locate than the one with a different aspect ratio. The same trend holds for the average search times across all shapes (Figure 7C; average search times: 2.45 s for same versus 2.15 s for different aspect ratio). To assess the significance of this effect, we performed an ANOVA on the search times with subject (15 levels), shape (eight levels), and aspect ratio (same versus different) as factors. The main effect of aspect ratio was highly significant, F(1, 3600) = 43.65, p < 0.00005, as were the main effects of subject, F(14, 3600) = 33.46, p < 0.00005, and shape, F(7, 3600) = 7.58, p < 0.00005; no other effect was significant (p > 0.05). A post-hoc analysis on individual shapes revealed the effect to be present in all eight shapes and significant in six of them (p < 0.05 for the main effect of aspect ratio in an ANOVA with subject and aspect ratio as factors; see Figure 7C). Because the same and different aspect ratio conditions involved exactly the same net change in length and width, the difference in search performance can only be attributed to aspect ratio and not to any other feature. We therefore conclude that aspect ratio influences visual search in a manner distinct from changes in length and width. 
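For completeness, here is a minimal sketch of this analysis in Python using statsmodels; the data are synthetic stand-ins (placeholder subject and shape labels, with a built-in RT advantage for the different aspect ratio condition), not the recorded data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for the real data: one row per correct trial.
rng = np.random.default_rng(1)
rows = [(f"s{s}", f"shape{sh}", ar,
         2.45 - (0.30 if ar == "different" else 0.0) + rng.normal(0, 0.4))
        for s in range(15) for sh in range(8)
        for ar in ("same", "different") for _ in range(16)]
df = pd.DataFrame(rows, columns=["subject", "shape", "aspect", "rt"])

# Three-factor ANOVA on search times (main effects only, as in the text).
model = smf.ols("rt ~ C(subject) + C(shape) + C(aspect)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```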
General discussion
Here, we investigated how multiple features combine in visual search. To this end, we characterized how searches for targets differing in multiple features relate to searches for targets differing along each of the individual features. The main finding is that reciprocal RT in the multiple feature search is accurately predicted by a linear sum of the reciprocal RTs for the single features, but not by models based on RT alone. Since reciprocal RT measures dissimilarity, this implies that dissimilarities in visual search sum linearly. This result held for separable features such as intensity, length, and orientation of rectangular bars (Experiments 2 & 3). It also held for integral features such as the length and width of a rectangle once aspect ratio was included as an additional feature (Experiment 4). Thus, features that appear integral can become separable on inclusion of additional features. In a separate experiment, we confirmed that aspect ratio influences search in a manner distinct from that expected from length and width alone (Experiment 5). Below we discuss the implications of these findings for visual search as well as for studies of perceptual dissimilarity. 
Relation to studies of visual search
Our results further our understanding of visual search in several ways. First, they elucidate the similarity relations that govern visual search. Search is generally thought to become hard when a target is similar to its distracters and easy when it is dissimilar (Duncan & Humphreys, 1989; Wolfe, Cave, & Franzel, 1989). Existing theories of search thus make no distinction between similarity and dissimilarity, and indeed there is no qualitative difference between the two. However, our results show that 1/RT, a measure of dissimilarity, provides a better description of multiple feature search than RT. Second, our results elucidate how features combine in visual search by ruling out race models in which each feature activates a separate accumulator. Instead, they support co-activation models in which salience signals related to each feature combine before accumulating (Found, 1998; Krummenacher et al., 2001, 2002; Monnier, 2006; Zehetleitner et al., 2009), and extend previous findings by showing that the co-activation is strikingly linear. The relative importance of the features in the model was also consistent with subjects' verbal reports after the task. These findings are broadly consistent with a dimension-weighting account of search in which each feature is weighted according to its intrinsic salience (Found & Muller, 1996; Muller & Krummenacher, 2006). Finally, we have demonstrated a quantitative relationship between search times and feature differences, showing that (a) reciprocal RT increases linearly with feature differences and (b) search depends on relative differences in length and intensity but on absolute differences in orientation. 
Visual search depends not only on the dissimilarity between target and distracters, but also on distracter heterogeneity (Duncan & Humphreys, 1989). We have recently found that complex search for a target among multiple types of distracters can be explained using simpler searches involving identical distracters (Vighneshvel & Arun, 2013). In that study, distracter heterogeneity had a negative contribution to complex search, i.e., it reduced the net salience signal, making search harder—but it too combined linearly with target-distracter dissimilarities. This result suggests a general mechanism whereby the net salience signal underlying visual search is a linear sum of many types of saliency signals. This is consistent with the notion of a master saliency map that drives eye movements across the visual scene (Zehetleitner et al., 2009; Schall et al., 2011; Tollner, Zehetleitner, Krummenacher, & Muller, 2011). Our results indicate that multiple types of saliency maps combine linearly. 
Relation to studies of dissimilarity
The question of how dissimilarities combine lies at the heart of the reductionist approach to perception: If the dissimilarity between complex objects is related to dissimilarities between their features, then the whole can be understood in terms of its parts. This question has been studied extensively using dissimilarity ratings obtained from subjects when objects differ in single versus multiple features. Visual search has the obvious advantage of being an extremely natural task in which performance is objectively but implicitly linked to similarity. A priori, subjective dissimilarity ratings need not be related to dissimilarities in visual search, since the two involve different processes and mechanisms. Nonetheless, the fact that intensity, length, and orientation combine linearly in visual search as they do in subjective dissimilarity (Attneave, 1950; Hyman & Well, 1967; Ronacher, 1992) and that they are separable (Garner & Felfody, 1970; Felfody, 1974; Cheng & Pachella, 1984) suggests that the underlying visual representations may be similar. 
Separable versus integral dimensions
Our results elucidate the classical notion of separable and integral features. Separable features such as intensity, length, or orientation combine linearly in dissimilarity ratings, and we have shown this to be true in visual search. A number of other features have also been found to interact linearly (Found, 1998; Krummenacher et al., 2001, 2002). Integral features such as length and width of a rectangle combine in a Euclidean manner, and this too we have replicated in visual search. However we have gone further to show that a model that includes aspect ratio (along with length and width) not only outperforms the Euclidean model but also every combination of length, width, area, or aspect ratio tested. This was an unexpected and robust finding—it suggests that integral features can potentially become separable on including additional features and, conversely, that integral or nonlinearly interacting features potentially indicate the presence of an additional unknown feature. We speculate that in general, perceived dissimilarity, whether measured in visual search or using subjective ratings, always combines linearly. 
In general, the classification of features as being integral or separable has been questioned because it assumes the manipulated external feature space to be isomorphic to the internal psychological space (Lockhead & King, 1977). Our results show that the internal “visual search space” is not isomorphic to length and width alone even though they alone were modified externally. Instead, visual search space is driven by length, width, and aspect ratio. In our previous work, we have shown that internal visual search space can be qualitatively different from external parametric space (Vighneshvel & Arun, 2013). These findings underscore the importance of understanding how internal psychological space relates to external features and the perils of assuming them to be equivalent. 
Aspect ratio as a novel feature in visual search
It has been speculated that aspect ratio may be a feature that guides visual search (Wolfe & Horowitz, 2004). However, to the best of our knowledge, its impact on visual search has never been conclusively demonstrated. Establishing aspect ratio as a feature is difficult because it cannot be manipulated independently of length and width. We were able to reveal its contribution by manipulating it while controlling for changes in length and width. The discovery of aspect ratio as a feature is noteworthy not only because it confirms its role in search but also because it rules out a number of alternatives that could have played a role, notably Euclidean combinations of length and width as well as rectangle area (Wender, 1971; Krantz & Tversky, 1975; Wiener-Ehrlich, 1978). In Experiment 5, we demonstrated the contribution of aspect ratio in a model-free manner by showing that, when the net change in length and width is held constant, searches in which the target also differs in aspect ratio are easier than searches in which target and distracters share the same aspect ratio. 
Why does aspect ratio matter? We note that shapes differing only in length and width but with equal aspect ratio are essentially scaled versions of each other and are likely to be images of the same object from different distances. Thus, aspect ratio might serve as a useful low-level feature to distinguish between objects in natural vision. 
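One way to state this observation is that, among the rectangle features considered here, aspect ratio is the only one invariant under uniform scaling by a factor s (our restatement, not an equation from the original):

```latex
\frac{sL}{sW} = \frac{L}{W},
\qquad\text{whereas}\qquad
sL \neq L,\quad sW \neq W,\quad (sL)(sW) \neq LW \quad (s \neq 1).
```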
Conclusions
Taken together, our results suggest that features in visual search combine linearly and that this combination occurs prior to (rather than during) the accumulation stage. At the very least, this holds for separable features such as intensity, length, and orientation, and for integral features such as the length and width of a rectangle once aspect ratio is included as an additional feature. More generally, our results point to linear feature combination as a governing principle of visual search. 
Acknowledgments
We thank the anonymous reviewers for their comments, Jeremy Wolfe for helpful pointers to the literature, and Nivedita Rangarajan for assistance with preliminary experiments. This research was funded by a start-up grant from the Indian Institute of Science and an Intermediate Fellowship from the Wellcome Trust – DBT India Alliance (both to S. P. A.). 
Commercial relationships: none. 
Corresponding author: S. P. Arun. 
Email: sparun@cns.iisc.ernet.in. 
Address: Centre for Neuroscience, Indian Institute of Science, Bangalore, India. 
References
Arun S. P. (2012). Turning visual search time on its head. Vision Research, 74, 86–92.
Attneave F. (1950). Dimensions of similarity. American Journal of Psychology, 63 (4), 516–556.
Brainard D. H. (1997). The psychophysics toolbox. Spatial Vision, 10, 433–436.
Brown S. D. Heathcote A. (2008). The simplest complete model of choice response time: Linear ballistic accumulation. Cognitive Psychology, 57 (3), 153–178.
Carpenter R. H. S. Reddi B. A. J. Anderson A. J. (2009). A simple two-stage model predicts response time distributions. Journal of Physiology, 587 (Pt 16), 4051–4062.
Cheng P. Pachella R. (1984). A psychophysical approach to dimensional separability. Cognitive Psychology, 16, 279–304.
Duncan J. Humphreys G. W. (1989). Visual search and stimulus similarity. Psychological Review, 96 (3), 433–458.
Dunn J. C. (1983). Spatial metrics of integral and separable dimensions. Journal of Experimental Psychology: Human Perception and Performance, 9 (2), 242–257.
Eckstein M. P. Thomas J. Palmer J. Shimozaki S. S. (2000). A signal detection model predicts the effects of set size on visual search accuracy for feature, conjunction, triple conjunction, and disjunction displays. Perception and Psychophysics, 62, 425–451.
Felfody G. (1974). Repetition effects in choice reaction time to multidimensional stimuli. Perception and Psychophysics, 15 (3), 453–459.
Found A. (1998). Parallel coding of conjunctions in visual search. Perception and Psychophysics, 60 (7), 1117–1127.
Found A. Muller H. (1996). Searching for unknown feature targets on more than one dimension: Investigating a "dimension-weighting" account. Perception and Psychophysics, 58 (1), 88–101.
Garner W. Felfody G. (1970). Integrality of stimulus dimensions in various types of information processing. Cognitive Psychology, 1, 225–241.
Gottwald R. Garner W. (1972). Effects of focusing strategy on speeded classification with grouping, filtering, and condensation tasks. Perception and Psychophysics, 11 (2), 179–182.
Hyman R. Well A. (1967). Judgments of similarity and spatial models. Perception and Psychophysics, 2 (6), 233–248.
Hyman R. Well A. (1968). Perceptual separability and spatial models. Perception and Psychophysics, 3 (3), 161–165.
Krantz D. Tversky A. (1975). Similarity of rectangles: An analysis of subjective dimensions. Journal of Mathematical Psychology, 12, 4–34.
Krummenacher J. Muller H. J. Heller D. (2001). Visual search for dimensionally redundant pop-out targets: Evidence for parallel-coactive processing of dimensions. Perception and Psychophysics, 63 (5), 901–917.
Krummenacher J. Muller H. J. Heller D. (2002). Visual search for dimensionally redundant pop-out targets: Redundancy gains in compound tasks. Visual Cognition, 9 (7), 801–837.
Lockhead G. King M. (1977). Classifying integral stimuli. Journal of Experimental Psychology: Human Perception and Performance, 3 (3), 436–443.
McMahon D. B. T. Olson C. R. (2009). Linearly additive shape and color signals in monkey inferotemporal cortex. Journal of Neurophysiology, 101 (4), 1867–1875.
Monnier P. (2006). Detection of multidimensional targets in visual search. Vision Research, 46 (24), 4083–4090.
Mordkoff J. T. Yantis S. (1991). An interactive race model of divided attention. Journal of Experimental Psychology: Human Perception and Performance, 17 (2), 520–538.
Mordkoff J. T. Yantis S. (1993). Dividing attention between color and shape: Evidence of coactivation. Perception and Psychophysics, 53 (4), 357–366.
Muller H. Krummenacher J. (2006). Locus of dimension weighting: Preattentive or postselective? Visual Cognition, 14 (4), 490–513.
Nakayama K. Martini P. (2011). Situating visual search. Vision Research, 51 (13), 1526–1537.
Potts B. Melara R. Marks L. (1998). Circle size and diameter tilt: A new look at integrality and separability. Perception and Psychophysics, 60 (1), 101–112.
Ronacher B. (1992). Pattern recognition in honeybees: Multidimensional scaling reveals a city-block metric. Vision Research, 32 (10), 1837–1843.
Schall J. D. Purcell B. A. Heitz R. P. Logan G. D. Palmeri T. J. (2011). Neural mechanisms of saccade target selection: Gated accumulator model of the visual-motor cascade. European Journal of Neuroscience, 33 (11), 1991–2002.
Shepard R. (1964). Attention and the metric structure of the stimulus space. Journal of Mathematical Psychology, 1, 54–87.
Shepard R. N. (1987). Toward a universal law of generalization for psychological science. Science, 237, 1317–1323.
Shimozaki S. S. Eckstein M. P. Abbey C. K. (2002). Stimulus information contaminates summation tests of independent neural representations of features. Journal of Vision, 2 (5): 1, 354–370, http://www.journalofvision.org/content/2/5/1, doi:10.1167/2.5.1.
Sripati A. P. Olson C. R. (2010). Global image dissimilarity in macaque inferotemporal cortex predicts human visual search efficiency. Journal of Neuroscience, 30 (4), 1258–1269.
Tollner T. Zehetleitner M. Krummenacher J. Muller H. J. (2011). Perceptual basis of redundancy gains in visual pop-out search. Journal of Cognitive Neuroscience, 23 (1), 137–150.
Torgerson W. (1965). Multidimensional scaling of similarity. Psychometrika, 30 (4), 379–393.
Treisman A. M. Gelade G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12 (1), 97–136.
Tversky A. (1977). Features of similarity. Psychological Review, 84, 327–352.
Tversky A. Gati I. (1982). Similarity, separability, and the triangle inequality. Psychological Review, 89 (2), 123–154.
Vighneshvel T. Arun S. P. (2013). Does linear separability really matter? Complex visual search is explained by simple search. Journal of Vision, 13 (11): 10, 1–24, http://www.journalofvision.org/content/13/11/10, doi:10.1167/13.11.10.
Wender K. (1971). A test of independence of dimensions in multidimensional scaling. Perception and Psychophysics, 10 (1), 30–32.
Wiener-Ehrlich W. (1978). Dimensional and metric structures in multidimensional stimuli. Perception and Psychophysics, 24 (5), 399–414.
Wolfe J. M. Cave K. R. Franzel S. L. (1989). Guided search: An alternative to the feature integration model for visual search. Journal of Experimental Psychology: Human Perception and Performance, 15 (3), 419–433.
Wolfe J. M. Horowitz T. S. (2004). What attributes guide the deployment of visual attention and how do they do it? Nature Reviews Neuroscience, 5 (6), 495–501.
Zehetleitner M. Krummenacher J. Muller H. J. (2009). The detection of feature singletons defined in two dimensions is based on salience summation, rather than on serial exhaustive or interactive race architectures. Attention, Perception, & Psychophysics, 71 (8), 1739–1759.