Open Access
Article  |   July 2022
Feature similarity is non-linearly related to attentional selection: Evidence from visual search and sustained attention tasks
Author Affiliations
  • Angus F. Chapman
    Department of Psychology, University of California, San Diego, La Jolla, CA, USA
    [email protected]
  • Viola S. Störmer
    Department of Brain and Psychological Sciences, Dartmouth College, Hanover, NH, USA
    [email protected]
Journal of Vision July 2022, Vol.22, 4. doi:https://doi.org/10.1167/jov.22.8.4
Abstract

Although many theories of attention highlight the importance of similarity between target and distractor items for selection, few studies have directly quantified the function underlying this relationship. Across two commonly used tasks—visual search and sustained attention—we investigated how target-distractor similarity impacts feature-based attentional selection. Importantly, we found comparable patterns of performance in both visual search and sustained feature-based attention tasks, with performance (response times and dʹ, respectively) plateauing at medium target-distractor distances (40°–50° around a luminance-matched color wheel). In contrast, visual search efficiency, as measured by search slopes, was affected by a much narrower range of similarity levels (10°–20°). We assessed the relationship between target-distractor similarity and attentional performance using both a stimulus-based and a psychological measure of similarity and found this nonlinear relationship in both cases. However, psychological similarity accounted for some of the nonlinearities observed in the data, suggesting that measures of psychological similarity are more appropriate when studying effects of target-distractor similarity. These findings place novel constraints on models of selective attention and emphasize the importance of considering the similarity structure of the feature space over which attention operates. Broadly, the nonlinear effects of similarity on attention are consistent with accounts that propose attention exaggerates the distance between competing representations, possibly through enhancement of off-tuned neurons.

Introduction
To prioritize the processing of relevant visual information, we can direct attention toward a specific spatial location or visual feature, such as a specific motion direction or color (Carrasco, 2011). Many theories have emphasized that attentional selection is based not only on aspects of the target, but that the visual surroundings, such as distractor items, influence selection (Duncan & Humphreys, 1989; Geng & Witkowski, 2019; Lee & Geng, 2020; Wolfe & Horowitz, 2004; Yu & Geng, 2019). One particularly important dimension is the similarity between targets and nontargets: selection is easiest when targets are very different from distractors, and increases in difficulty as they are made more similar to one another (Duncan & Humphreys, 1989; Pashler, 1987; Treisman & Gelade, 1980; Vighneshvel & Arun, 2013), which is presumably driven by the degree of representational overlap between target and distractor features. Thus, to understand attentional selection of visual features, and to develop and constrain models of attention, it is important to characterize target-distractor similarities across a wide range and well-sampled distribution of feature values and to assess their influence on attentional performance.1 
Target-distractor similarity in visual search
Research into the relationship between attention and target-distractor similarity has predominantly relied on visual search paradigms. It has been argued that similarity is a major factor that determines the efficiency of search: when targets and distractors are sufficiently distinct from one another, search occurs rapidly and in parallel, whereas when similarity is high a more thorough, serial search is required to identify the target (Buetti, Cronin, Madison, Wang, & Lleras, 2016; Itti & Koch, 2000; Rosenholtz, Huang, Raj, Balas, & Ilie, 2012; Treisman & Gelade, 1980; Wolfe, 1994). Likewise, attentional engagement theory (Duncan & Humphreys, 1989, 1992) proposes a “search surface,” with search efficiency monotonically increasing—and response times decreasing—as the similarity between targets and non-targets decreases2. Consistent with this, research has shown that feature similarity is coded in visual cortex, because the response of visual neurons changes systematically as a function of the similarity between a current stimulus and that neuron's preferred feature (Martinez-Trujillo & Treue, 2004; Treue & Martinez-Trujillo, 1999). Additionally, neural recordings have demonstrated that attention enhances responses to target features (Andersen, Hillyard, & Müller, 2008; Müller, Andersen, Trujillo, Valdés-Sosa, Malinowski, & Hillyard, 2006) and can even suppress responses to distractor features that are similar to the target (Störmer & Alvarez, 2014). Thus, feature-based attention—and visual search performance specifically—depends on the ability of the visual system to separate target and distractor representations to enable efficient target selection. 
Despite these theories clearly articulating the necessity of understanding the similarity between target and distractor stimuli, few studies have attempted to systematically quantify the relationship between similarity and attentional performance at high resolution. Of the literature that has measured visual search performance while manipulating the similarity between items, most studies rely on qualitative or categorical distinctions, such as between color or shape categories (Alexander & Zelinsky, 2012; Becker, Folk, & Remington, 2013; Buetti, Xu, & Lleras, 2019; Lleras, Wang, Madison, & Buetti, 2019; Ng, Buetti, Patel, & Lleras, 2021; Reijnen, Wallach, Stöcklin, Kassuba, & Opwis, 2007). A few studies have assessed a broader range of quantified feature values to describe the impact of target-distractor similarity on performance in visual search tasks. For example, Nagy and colleagues (Nagy & Cone, 1996; Nagy & Sanchez, 1990) measured search time as a function of the color difference between target and distractor items, finding that performance improved log-linearly for more similar items before plateauing at higher levels of dissimilarity. Another study assessed visual search performance as a function of target-distractor similarity using orientation and found that search reaction times were best described as a sigmoid function of the orientation difference between target and distractor stimuli up until 45° of orientation difference, after which performance plateaued (Arun, 2012). One study in pigeons measured visual search performance using stimuli that differed in shape and size and found that pigeons’ search speed was well predicted as an exponential decay function of the similarity between target and distractor array items (Blough, 1988). However, the generalizability of each of these findings is limited, given that the number of distractors remained constant. Only one study, to our knowledge, has measured search slopes as a function of target-distractor similarity: Wolfe, Klempen, and Shulman (1999) found that, in addition to faster search response times, search slopes concurrently decreased as a function of decreasing orientation similarity, suggesting that both measures might index the efficiency of search under manipulations of target-distractor similarity. Together, these studies suggest that search performance follows a nonlinear, roughly exponential function of the similarity between targets and distractors. 
Feature similarity in sustained attention tasks
Other researchers have studied feature-based attention using different paradigms, particularly ones that require subjects to select a target feature among spatially intermingled nontarget features and sustain attention to that feature for an extended period of time (Andersen et al., 2008; Martinez-Trujillo & Treue, 2004; Sàenz, Buraĉas, & Boynton, 2003). The main difference between visual search tasks and these sustained feature-based attention tasks is that, while both depend on selecting targets based on visual features, visual search also contains a spatial component: attention is directed to the location of the target feature once it is detected. Visual search performance is thus also a function of the ability to use those features to guide spatial attention to the target item (Andersen, Müller, & Hillyard, 2009; Shih & Sperling, 1996). Sustained attention tasks are therefore useful for studying feature-based selection processes in isolation from the spatial components of attention that are present during visual search, and they have been instrumental in providing evidence for the spatially global nature of feature-based attention (Chapman & Störmer, 2021; Sàenz et al., 2003), as well as the neural effects of selecting individual features independently of location (Andersen et al., 2008; Andersen, Hillyard, & Müller, 2013; Sàenz, Buraĉas, & Boynton, 2002; Serences & Boynton, 2007; Störmer & Alvarez, 2014). To our knowledge, there have been no attempts to systematically evaluate the effects of target-distractor similarity on performance in sustained attention tasks and to compare variation in performance to visual search tasks. In fact, visual search and sustained attention tasks are largely used by separate groups of researchers despite the shared goal of understanding and characterizing feature-based attention (search: e.g., Buetti et al., 2016; Duncan & Humphreys, 1989; Wolfe, Klempen, & Shulman, 1999; sustained attention: e.g., Andersen et al., 2013; Martinez-Trujillo & Treue, 2004; Sàenz et al., 2003). It is thus critical to understand how performance in these tasks is related to, and potentially predicted by, target-distractor similarities to help bridge theories and findings across these two literatures. 
Stimulus-based and psychological similarity
Critically, previous studies (in particular visual search studies that varied target-distractor similarity; e.g., Nagy & Sanchez, 1990; Arun, 2012) operationalized target-distractor similarities in terms of the stimulus characteristics of a given feature space and assumed a linear similarity function (measured, e.g., as angular distance in orientation or color space; hereafter referred to as “stimulus-based similarity”). However, this misses a critical issue: similarity in a given stimulus space does not map linearly onto the representational similarity in our mind, which is globally exponential (Shepard, 1987). To demonstrate this, consider the color wheel in Figure 1D and the associated psychological dissimilarity functions. The graph shows psychological dissimilarity between colors as a function of angular distance on a color wheel. The difference in psychological dissimilarity between two colors 120° and 180° around the color wheel is negligible, whereas the same 60° change from 30° to 90°, for example, produces a massive change in psychological dissimilarity. Thus, psychological dissimilarity of features does not scale linearly with distance along a color wheel but is instead exponential with respect to the target color in mind (see also Schurgin, Wixted, & Brady, 2020). Indeed, a rich body of work suggests that this psychological similarity, defined based on psychophysical judgments from observers rather than on the stimulus space itself, captures aspects of performance across a wide range of tasks (Shepard, 1987; Sims, 2018). In particular, Shepard's (1987) Universal Law of Generalization predicts that the likelihood of a behavior being generalized from one stimulus to another is an exponential function of the psychological similarity between them. In the case of visual search, for example, the generalization in question is whether a distractor stimulus is falsely “detected” as the target. To date, no research has directly and independently assessed the relationship between psychological similarity and attentional selection. However, because attention depends on the representational organization in sensory cortices, it is critical to test how attentional selection is constrained by psychological similarity, and not just similarity in stimulus space. For example, it may be the case that psychological similarity—commonly described as a nonlinear, exponential function—can in part explain the pattern of performance across different target-distractor distances, which often also appears to follow a roughly exponential function. 
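In its standard form, Shepard's law states that the probability g of generalizing a response from one stimulus to another decays exponentially with the psychological distance d between them, \(g(d) = e^{-kd}\), where k is a freely estimated sensitivity parameter; the empirically measured dissimilarity function in Figure 1D plays the role of d for this color space.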
Figure 1.
 
(A) Example of a visual search trial with a target stimulus among seven distractors 180° distant on the color wheel. Stimuli and text not to scale. (B) Example of a quad similarity task trial. Participants were instructed to select which pair (top or bottom) were least similar in color. (C) Example of triad similarity task trial. Participants were instructed to select which item (left or right) was most similar in color to the center stimulus. (D) Psychological dissimilarity estimates from MLDS for Experiments 1a, 2 (online, quad similarity task), and 3 (in-lab, triad similarity task). The CIELab color wheel used in the experiments is presented alongside this function. For all experiments, psychological dissimilarity reached 50% at approximately 30° around the color wheel.
The current study
Thus, in the current study we quantify how both stimulus similarity and psychological similarity—measured in an independent psychophysical task—affect the efficiency of feature-based attentional selection. The main question is whether nonlinearities in psychological similarity can, at least in part, explain behavioral nonlinearities observed in attention tasks. If the efficiency of selection is driven by the perceived similarity between items that compete for attention, plotting performance as a function of psychological similarity, rather than stimulus-based similarity, should better capture the non-linear relationship. Critically, we assess attentional selection across two commonly used tasks—visual search and sustained feature-based attention—to generalize our results and help bridge findings across different studies on feature-based attention that have used one task or the other. We use color as a test case and systematically vary the stimulus-based similarity between target and distractors, defined in degrees around a roughly perceptually uniform color wheel (CIELab) commonly used in attention and working memory studies, while psychological similarity is measured empirically using the same color space. Following previous work (Schurgin et al., 2020), we assess psychological similarity with independent perceptual tasks, using maximum-likelihood difference scaling to model the similarity structure among features (Maloney & Yang, 2003). To measure attentional selection, we use both visual search (Experiments 1a, 1b, 1c, 2) and a sustained feature-based attention task (Experiment 3). By comparing how target-distractor similarities affect performance across different tasks, we can test theories of attention more robustly, allowing for a more generalizable understanding of the effects of feature similarity on attentional selection. 
Experiment 1a: How is visual search performance affected by feature similarity?
In Experiment 1a, we assessed the effects of stimulus-based and psychological target-distractor similarity on visual search response times. On each trial, participants saw a circular array of 8 colored circles, and had to respond as quickly as possible when they detected the target color, which was previewed at the beginning of the trial. All colors were sampled from a circular color space (Suchow, Brady, Fougnie, & Alvarez, 2013), and the similarity between target and distractor items was manipulated by varying the angular distance between the colors. 
Method
Participants
Sixty undergraduate students from the University of California, San Diego subject pool participated in this experiment for course credit. We excluded eight participants whose average accuracy was below 70% in either task, leaving a final sample of 52 participants (44 women, six men, two did not report gender), aged 18 to 28 (M = 20.6 ± 1.6 years). This sample size was determined based on the size of the effects observed in previous studies (Becker et al., 2013; Buetti et al., 2016; Ng et al., 2021), accounting for additional variance introduced by online data collection, and provides a posteriori power of 80% to detect a significant effect of \(\eta _p^2\) > .219. For all experiments, participants gave informed consent before starting the experiment as approved by the Institutional Review Board at UC San Diego. 
Stimuli
Participants completed the experiment online on their own personal computers. Stimulus sizes are reported in pixels because we had limited control over display size and viewing distance. However, the display size was restricted to a minimum of 800 × 600 pixels, and participants were instructed to complete the experiment in full screen. 
All colors were selected from a set of 360 equally spaced equiluminant colors in the CIELab color space, drawn from a circle with radius 49 units, centered at L = 54, a = 21.5, b = 11.5. The visual search array consisted of eight circles (80px diameter) arranged evenly in a circle centered on fixation (10px diameter filled black dot). Each search array item was positioned 260px from fixation. On each trial, the target color was selected randomly from the full set of colors, and the distractor was chosen relative to the target based on the experimental condition (10°, 20°, 30°, 40°, 50°, 60°, 90°, or 180° from the target), therefore varying the target-distractor similarity. Distractor colors were equally often chosen by rotating the color wheel clockwise and counterclockwise from the target position. For the psychological similarity measurement, two pairs of colored circles (140px in diameter) were shown on the top and bottom of the display, and color similarity between them was varied systematically. 
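For illustration, the color set can be reconstructed in R with base grDevices color conversion (a minimal sketch assuming a D65 white point; because the experiment ran in participants' browsers, the exact rendered colors depended on each display):

```r
# 360 equally spaced, nominally equiluminant CIELab colors on a circle of
# radius 49 centered at L = 54, a = 21.5, b = 11.5 (as described above).
angles <- seq(0, 359) * pi / 180
lab <- cbind(L = 54,
             a = 21.5 + 49 * cos(angles),
             b = 11.5 + 49 * sin(angles))

# Convert to sRGB; convertColor() clips out-of-gamut values by default.
rgb_vals <- grDevices::convertColor(lab, from = "Lab", to = "sRGB",
                                    from.ref.white = "D65")
wheel <- grDevices::rgb(rgb_vals[, 1], rgb_vals[, 2], rgb_vals[, 3])

wheel[1]    # color at 0 degrees on the wheel
wheel[181]  # the maximally distant color, 180 degrees away
```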
Procedure
All participants completed the experiment online. The order of the two tasks was determined randomly when the experiment was loaded. In the visual search task (Figure 1A), participants searched for one pre-cued target item among seven distractor items. At the beginning of each trial, blank placeholders (outlined circles with no color) were shown at each of the eight array positions, and the upcoming target color was cued by presenting a colored circle at fixation for 800 ms. Immediately after the cue disappeared, each search array location was filled with color (seven distractor-colored items; one target-colored item). The position of the target item was randomly determined on each trial. Participants were instructed to locate the target-colored item as quickly as possible and press the space bar as soon as they had found it. As soon as they responded, the color of each search item was removed (leaving only the placeholders), and participants had to click on the location at which they saw the target. Participants completed 384 search trials, consisting of 48 trials of each target-distractor distance level in a random order. 
To measure psychological similarity between targets and distractors, participants also completed a color quad judgment task (Figure 1B; Maloney & Yang, 2003; Schurgin et al., 2020). Participants were shown two pairs of colored circles, one pair on the upper half of the display and one on the lower half. On each trial, participants were instructed to indicate which pair was least similar in color (“t” for the top pair, “b” for the bottom pair). All items remained on the screen until a response was made. For each pair, the difference between the two colors was manipulated independently across trials (0°, 10°, 20°, 30°, 40°, 50°, 60°, 90°, 120°, or 180°; sampled without replacement on each trial). On each trial, the color of one item was randomly drawn from the color wheel, thereby determining its paired item's color. The colors of the other pair were centered on the opposite side of the color wheel (180° away), such that each pair spanned a distinct section of the color wheel. The pairs were randomly assigned to the top and bottom of the display, and each item in a pair was randomly assigned to the left or right position. Participants completed 405 trials of the similarity task, consisting of nine repetitions of each possible combination of color differences for the two pairs. 
Data Analysis
Data were analyzed using R (Version 4.1.0; R Core Team, 2021) with the “tidyverse” package (Wickham et al., 2019). 
Quad color similarity task
To calculate the psychological similarity function, we used the Maximum Likelihood Difference Scaling method (Maloney & Yang, 2003; Knoblauch & Maloney, 2012). This method finds scaled values for similarity across the colors tested that best predict a participant's accuracy in the quad task. Psychological similarity functions were fit for each subject using GLMs (Knoblauch & Maloney, 2012) and scaled such that dissimilarity was 0 at 0° (the target color) and 1 at 180° (the most dissimilar color on the color wheel). Note that this scaling is done to bring participants onto a standard scale, as GLM estimates vary significantly based on average accuracy in the similarity task. However, psychological dissimilarity estimates show consistent scaling regardless of what maximally dissimilar stimulus pairs are presented. 
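As a sketch, this analysis can be run with the CRAN MLDS package that accompanies Knoblauch and Maloney (2012); the data frame layout and response coding follow that package's quadruple conventions, and the object names here are illustrative:

```r
library(MLDS)

# quad_df: one row per trial in the package's quadruple (mlds.df) format,
# with a binary response (which pair was judged more different, coded per
# the package's convention) and four columns giving each pair member's
# rank among the tested color distances (0, 10, 20, ..., 180 degrees).
fit <- mlds(quad_df)

# Perceptual scale values, one per tested color distance. Rescale so that
# dissimilarity is 0 at 0 degrees and 1 at 180 degrees, as in the text.
dissim <- (fit$pscale - min(fit$pscale)) / diff(range(fit$pscale))
```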
Visual search task
To assess the effects of target-distractor similarity, we conducted a repeated-measures analysis of variance (ANOVA) on the natural logarithm of visual search response time (RT) to better meet the normality assumptions of the general linear model. RTs were calculated from speeded responses to target detection; trials with incorrect identifications of the target location (4.6% of trials) and RTs slower than 10 seconds or faster than 200 ms (3.55% of trials) were excluded. Follow-up pairwise comparisons were conducted using false-discovery rate (FDR) correction (α < .05). 
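In base R, this analysis amounts to something like the following sketch (assuming a long-format data frame rts with one mean log RT per subject and distance level; names are illustrative):

```r
# Repeated-measures ANOVA on log RT, with subject as the error stratum.
rts$subject  <- factor(rts$subject)
rts$distance <- factor(rts$distance)
fit <- aov(log_rt ~ distance + Error(subject / distance), data = rts)
summary(fit)

# FDR-corrected pairwise comparisons across distance levels; paired = TRUE
# assumes rows are ordered consistently by subject within each level.
with(rts, pairwise.t.test(log_rt, distance,
                          paired = TRUE, p.adjust.method = "fdr"))
```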
Modeling of exponential functions
In addition to standard repeated-measures analyses, we also fit untransformed individual-subject RT data using exponential functions. These functions took the form \(RT = Ae^{-L \times distance} + F\), where the three parameters A, L, and F correspond to the model intercept (RT at a distance of 0, adjusted for floor), the slope/steepness of the exponential function, and the floor RT, respectively. For distance, we used both color distance in angles around the color wheel and psychological color similarity, estimated per subject from the quad similarity task (both scaled to lie between 0 and 1). Before modeling, RTs were restricted to correct trials between 200 and 10,000 ms and were scaled using z-scores (we reversed this scaling post-modeling for all visualizations). Models were fit using the function nlmer from the R package “lme4” (v1.1-26; Bates, Maechler, Bolker, & Walker, 2015), and initial models included random effects for each model parameter per subject. For Experiment 1, the final model excluded the random effect of L, because it was highly correlated with the other two random effects. These model fits help visualize the relationship between search performance and our two similarity metrics. 
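Such a model can be specified with nlmer roughly as follows (a sketch assuming z-scored RTs in rt_z and a 0-1 scaled predictor dist; start values are illustrative, and the random-effects structure matches the final Experiment 1 model, with no random effect for L):

```r
library(lme4)

# Exponential decay with analytic gradients, as nlmer requires; the floor
# parameter is named F0 to avoid clashing with R's FALSE shorthand.
expDecay <- deriv(~ A * exp(-L * dist) + F0,
                  namevec = c("A", "L", "F0"),
                  function.arg = c("dist", "A", "L", "F0"))

# Fixed effects for A, L, and F0; by-subject random effects for the
# intercept and floor only.
fit <- nlmer(rt_z ~ expDecay(dist, A, L, F0) ~ (A + F0 | subject),
             data = search_data,
             start = c(A = 2, L = 5, F0 = -0.5))
summary(fit)
```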
Results
Performance on the similarity judgment task was generally very high (M = 85.4%, SD = 5.6). We used maximum-likelihood difference scaling (MLDS) to transform responses on this task into an estimate of psychological dissimilarity, representing participants’ perceived dissimilarity between colors as a function of their distance around the color wheel. As expected, psychological dissimilarity gradually increased around the color wheel and followed a similar non-linear, roughly exponential shape as has been previously reported (Schurgin et al., 2020). In particular, as shown in Figure 1D, psychological dissimilarity reached 50% of maximum at a color distance of only 30° (i.e., 16.7% of the maximum distance around the color wheel). In other words, the perceived difference between the target color (0°) and a color 30° away is as large as the perceived difference between colors 30° and 180° away from the target, because the two intervals cover equal distances on the y-axis. 
In the visual search task, there was a main effect of target-distractor distance on RT, F(7,357) = 199.93, p < .001, \(\eta _p^2\) = .797, shown in Figure 2A. RTs declined steeply across the first few target-distractor distances (e.g., from 10° to 30°) and continued to decrease with greater distance between target and distractor color, until they flattened out at about 60° distance on the color wheel, as evidenced by significant pairwise comparisons of distances 10°–40° compared to all later distances, all p < 0.003. There was no difference in RT for comparisons between 50° and 180°, all p > 0.117 (except that RTs were significantly faster at 60° than at 50°, p = 0.039). Thus search RTs were strongly influenced by target-distractor distance around the color wheel, but mainly at nearby distances. 
Figure 2.
 
Mean visual search response times in Experiment 1a as a function of (A) stimulus-based target-distractor distance around the color wheel and (B) psychological dissimilarity, as measured with the quad color similarity task for each participant. Error bars are within-subject SEM; red dashed lines correspond to exponential model fits.
Interestingly, although search times flattened out completely at a color distance of 60°, the psychological dissimilarity function only reached 75% by this distance (Figure 1D), pointing to a non-linear relationship between psychological similarity and visual search performance. This non-linearity was confirmed by plotting visual search RTs against participants’ psychological dissimilarity estimates (Figure 2B). Because participants have different psychological similarity estimates for each of the assessed color distances, RTs cannot be directly compared across fixed similarity levels as when using stimulus distance. Following Blough (1988), we fit an exponential model to our data, separately for target-distractor distance and psychological dissimilarity. Exponential functions fit the data well, both at the group (dashed lines in Figures 2A and 2B) and individual subject level (see Supplementary Figure S1), suggesting that visual search performance was well described as an exponential function of stimulus distance. Consistent with the non-linear relationship between target-distractor distance and psychological dissimilarity, the exponential fits were less steep when fit with the dissimilarity metric (target-distractor distance slope = 24.71, SE = 0.76; psychological dissimilarity slope = 5.67, SE = 0.13). Although model parameters could not be compared directly because of the nonlinear model-fitting approach, this pattern suggests that psychological dissimilarity accounts for a portion of the nonlinearity in visual search RTs. 
Experiment 1b and 1c: Are influences of color similarity specific to certain color values?
In Experiment 1a, the target color was randomly selected on each trial, meaning that the function of target-distractor distance we observed was averaged across the entire stimulus space we used. If there are inhomogeneities in the way that distance impacts performance at different points around the color space, such as the existence of categorical boundaries (e.g., the boundary between “green” hues and “blue” hues) which do not occur at consistent distances in CIELab space (Bae, Olkkonen, Allred, & Flombaum, 2015; Fang, Becker, & Liu, 2019), then the results of Experiment 1a might not be representative of any single color. To account for this possibility, we ran a version of the visual search experiment in which each participant was assigned to see one of six possible target colors (arranged evenly around the color space) throughout the entire experiment. If the effects of target-distractor distance are dependent on the position in color space, then this should be observable in the performance of the different groups. 
Method
Participants
One hundred twenty-three undergraduate students from the University of California, San Diego subject pool participated in this experiment for course credit. We excluded three participants whose accuracy was lower than 70% in either task. The final sample of 120 participants (90 women; aged 18-25, M = 20.48 ± 1.6 years) was randomly and equally assigned to one of six experimental conditions (target colors evenly spaced 60° apart around the color wheel; 20 per group). This sample size provides 90% power to detect a main effect of target-distractor similarity of \(\eta _p^2\) > .107, and an interaction with target color of \(\eta _p^2\) > .183. 
Stimuli & procedure
The experiment was conducted similarly to Experiment 1a, except that participants now completed visual search for the same target color on every trial. Target color cues were still presented on each trial, but the color remained the same throughout the experiment. The possible target colors were positioned at 0°, 60°, 120°, 180°, 240°, and 300° around the color wheel used in Experiment 1a. These target colors spanned the entire range of the color space and fell in distinct color categories (red, orange, yellow, green, blue, and purple, respectively). Participants performed 192 trials of the visual search task, 24 for each target-distractor distance (10°, 20°, 30°, 40°, 50°, 60°, 90°, and 180°). Participants also completed 270 trials of a color triad similarity task (Schurgin et al., 2020; Torgerson, 1958), in which they judged on each trial which of two colors was most similar to their target color from the visual search task (see Figure 1C). A circle in the target color was presented at the center of the display, while two other colored circles were presented to its left and right; the colors of these test circles were sampled on each trial from fixed distances relative to the target (0°, 10°, 20°, 30°, 40°, 50°, 60°, 90°, 120°, or 180°). Analyses of the similarity task data are presented in the Supplementary Materials. 
Data analysis
Analyses excluded trials with incorrect search responses (2.9% of trials) or responses that were slower than 10 seconds or faster than 200 ms (4.0% of trials). 
Results
The results of this experiment are shown in Figure 3, which depicts the mean RTs separately for each group as a function of target-distractor distance. Model fits for each group are shown in Supplementary Figures S3 and S4. Overall, we found a roughly similar pattern of visual search performance across all groups and relative to the results of Experiment 1a, though we also observed some differences between target colors. We conducted an ANOVA with target color as a between-subject factor and target-distractor distance as a within-subject factor. Analyses revealed a main effect of target-distractor distance, F(7,798) = 373.61, p < 0.001, \(\eta _p^2\) = .742, consistent with the decrease in visual search RTs with increasing distance seen in Experiment 1a. There was also a main effect of target color, F(5,114) = 3.49, p = 0.006, \(\eta _p^2\) = .133, such that participants in some groups were on average faster than in other groups. We also found a small but reliable interaction between these factors, F(35,798) = 3.19, p < 0.001, \(\eta _p^2\) = .032, suggesting that different target colors were affected differently by target-distractor distance. Because of the small effect size of this interaction and the many cross-condition comparisons that could be made, identifying the nature of this interaction was not straightforward. However, simple one-way ANOVAs across target color groups for each target-distractor distance level showed significant differences across groups only at low to moderate distances of 10° (p = 0.002), 20° (p < 0.001), and 30° (p = 0.024). From 40° onward, there were no significant differences across the groups (all uncorrected p > .07). We also conducted pairwise comparisons across target-distractor distances within each target color group, to assess the point at which RTs stopped improving with increasing distance. For two groups (target colors 0° and 60°), RTs improved significantly from 10° through 40°, beyond which there was no improvement, whereas in the other four groups RTs improved significantly only up to 30°. However, given that each group was much smaller than the sample in Experiment 1a, these data may underestimate the RT differences across conditions. When collapsed across groups, we found that RTs were significantly slower at 10°, 20°, and 30° compared to later distances (all p < 0.002). Furthermore, RTs at 180° were significantly faster than those at 40°, 50°, and 90° (p < 0.01; marginally significant difference between 60° and 180°, p = 0.053). 
Figure 3.
 
Mean visual search RTs for each target color group in Experiment 1b. Solid lines correspond to the mean RT for each target-distractor distance, with each line drawn in its particular target color. Error bars are omitted for visibility. Model fits can be found in the supplement.
Because Experiment 1b was run between subjects, we conducted a within-subject replication in which we tested the same six target colors and target-distractor distances of 10°, 30°, 60°, and 180°. We replicated the finding that target colors differed only at the low target-distractor distances (10° and 30°), with the slowest RTs for colors in the 0° to 60° range of the color wheel (pinkish colors). There was no hint of any differences at larger target-distractor distances (i.e., 180°). The results of this additional Experiment 1c are reported in detail in the Supplementary Materials. 
Thus, although we observed reliable differences between target colors across these two experiments, these were present only for small target-distractor distances. Interestingly, pinkish colors (0°–60° around this CIELab color wheel) produced the largest RTs. This is notable because previous work (Schurgin et al., 2020), as well as our own data (see Supplementary Figure S2), shows that psychological dissimilarity increases slowest in this region of color space, suggesting that colors 10° away from this point (e.g., red at 0°) are highly perceptually confusable and, hence, yield the slowest visual search RTs. Thus most of the differences among visual search target colors appear to be driven by differences in perceptual dissimilarity. Of most interest to the present study, the results of Experiments 1b and 1c demonstrate that the general function of target-distractor distance is not an artifact of averaging across trials with different target colors, nor does performance depend on differences across color categories. 
Experiment 2: How do highly similar colors affect search efficiency (search slope)?
Experiment 1 revealed a nonlinear relationship between stimulus-based feature distance and visual search response times, consistent with previous theoretical and empirical work (Blough, 1988; Duncan & Humphreys, 1989; Nagy & Cone, 1996; Nagy & Sanchez, 1990; Wolfe & Horowitz, 2004; Wolfe et al., 1999); furthermore, the results suggest that carefully quantified psychological dissimilarity—which is nonlinearly related to stimulus distance in a given feature space—explains a portion of the nonlinearities observed in visual search performance. Whereas Experiment 1 focused on measuring overall search RTs across different target-distractor distances, a more sensitive measure of search performance may be search efficiency, which is often used to index perceptual similarity between targets and distractors. Search efficiency can be measured using search slopes, estimated by manipulating the number of nontarget items in the search display, with the general idea that shallower search slopes indicate less attentional scrutiny (more efficient search) and steeper search slopes indicate less efficient search. Therefore, in Experiment 2, participants performed visual search while we manipulated both the target-distractor similarity, as in Experiment 1, and the number of distractors. On half of the trials, we presented the target item among seven distractors, as in Experiment 1, whereas on the other half of trials only two distractors were presented. Our main goal was to assess how stimulus-based distance and psychological dissimilarity between targets and distractors relate to search efficiency. 
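Concretely, a search slope is the change in mean RT per added display item; with set sizes of three and eight items (two vs. seven distractors plus the target), it can be estimated per condition as in this sketch (data frame and column names are illustrative, with set_size coded as 3 or 8):

```r
library(dplyr)
library(tidyr)

# Mean RT per subject, target-distractor distance, and set size, then the
# slope in ms per item across the two set sizes (8 - 3 = 5 added items).
slopes <- search_data %>%
  group_by(subject, distance, set_size) %>%
  summarise(mean_rt = mean(rt), .groups = "drop") %>%
  pivot_wider(names_from = set_size, values_from = mean_rt,
              names_prefix = "ss") %>%
  mutate(slope_ms_per_item = (ss8 - ss3) / 5)
```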
Method
Participants
Thirty-six undergraduate students from the University of California, San Diego subject pool participated in this experiment for course credit. We excluded two participants whose average accuracy was below 70% in either task, leaving a final sample of 34 participants (25 women, seven men, two did not report gender), aged 18 to 32 (M = 21.7 ± 3.7 years). This sample size provides 80% power to detect an interaction between target-distractor distance and the number of distractor items at \(\eta _p^2\) > .280. 
Stimuli & procedure
Colors were drawn from the same set as Experiment 1. A target color was randomly selected on each trial, and the distance between target and distractor colors was manipulated across six levels: 10°, 15°, 20°, 30°, 60°, or 180°. Distractor colors were randomly chosen clockwise or counterclockwise relative to the target. Participants completed 384 trials of visual search, in which target-distractor similarity and distractor set size (two or seven distractors) were manipulated. When two distractors were presented on a trial, the remaining search array items remained as unfilled, outlined circles, and were not selectable by participants as the target location. The target color was cued before each trial, as in Experiment 1
Participants also completed 396 trials of the quad color similarity task, with each pair differing by 0°, 10°, 15°, 20°, 30°, 60°, 90°, 120°, or 180°, and each combination of pairs repeated 11 times. 
Data analysis
Data analysis was conducted similarly to Experiment 1. For the visual search task, a repeated-measures ANOVA with factors target-distractor distance and number-of-distractors was conducted, and significant interactions and main effects were followed up by simple pairwise comparisons with FDR correction. Analyses excluded trials with incorrect search responses (6.95% of trials) or responses that were slower than 10 seconds or faster than 200 ms (8.6% of trials). 
For exponential modeling, we adapted our previous method to allow for estimation of how model parameters differed by the number of distractors in the display. Specifically, we fit an expanded model with six parameters, three for each distractor set size, such that the full model function had the form \(RT = (A_0 + A_1 \times ND_7)e^{-(L_0 + L_1 \times ND_7) \times distance} + F_0 + F_1 \times ND_7\), where ND7 corresponds to a dummy parameter encoding trials on which there were seven, rather than two, distractor items. To assess whether each parameter significantly varied as a function of the number of distractors, we compared the full model (six parameters) to models in which each parameter was fixed across distractor set size (i.e., the corresponding offset parameter, such as A1, was set to zero) using chi-square tests. The full model was initially fit with random effects for each parameter by subject (six random effects), after which the random effect of F1 was removed because it explained a small amount of variance, and the random effect of L1 was removed because it was highly correlated with other random effects. 
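In code, the expanded model and the parameter tests can be sketched as follows (assuming the same deriv/nlmer setup as in Experiment 1; nd7 is the set-size dummy and all names are illustrative):

```r
library(lme4)

# Full six-parameter model: A, L, and the floor F0 each get an offset for
# seven-distractor trials via the dummy nd7 (1 = seven distractors, 0 = two).
expDecay2 <- deriv(~ (A0 + A1 * nd7) * exp(-(L0 + L1 * nd7) * dist) +
                     F0 + F1 * nd7,
                   namevec = c("A0", "A1", "L0", "L1", "F0", "F1"),
                   function.arg = c("dist", "nd7",
                                    "A0", "A1", "L0", "L1", "F0", "F1"))

full <- nlmer(rt_z ~ expDecay2(dist, nd7, A0, A1, L0, L1, F0, F1) ~
                (A0 + A1 + L0 + F0 | subject),
              data = search_data,
              start = c(A0 = 2, A1 = 0.5, L0 = 5, L1 = 1, F0 = -0.5, F1 = 0))

# To test whether the slope differs by set size, refit with L1 fixed at 0
# and compare the models with a likelihood-ratio (chi-square) test.
expDecayNoL1 <- deriv(~ (A0 + A1 * nd7) * exp(-L0 * dist) + F0 + F1 * nd7,
                      namevec = c("A0", "A1", "L0", "F0", "F1"),
                      function.arg = c("dist", "nd7",
                                       "A0", "A1", "L0", "F0", "F1"))
reduced <- nlmer(rt_z ~ expDecayNoL1(dist, nd7, A0, A1, L0, F0, F1) ~
                   (A0 + A1 + L0 + F0 | subject),
                 data = search_data,
                 start = c(A0 = 2, A1 = 0.5, L0 = 5, F0 = -0.5, F1 = 0))
anova(reduced, full)
```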
Results
Performance was high on the similarity task (M = 86.4%, SD = 6.4), and MLDS estimates of psychological dissimilarity were nearly identical to Experiment 1a (Figure 1D). 
Stimulus distance
The overall pattern of visual search performance was broadly similar to Experiment 1a and is shown in Figure 4A. In the visual search task, there was again a main effect of target-distractor distance, F(5,165) = 140.83, p < 0.001, \(\eta _p^2\) = .810, and an interaction between target-distractor distance and number of distractors, F(5,165) = 11.91, p < 0.001, \(\eta _p^2\) = .265. However, there was no main effect of the number of distractor items, F(1,33) = 0.90, p = .349, \(\eta _p^2\) = .027. RTs were significantly slower for seven than for two distractors at a target-distractor distance of 10°, p = 0.001, and marginally slower at 15°, p = 0.069, with each additional distractor increasing RTs by an average of 70.7 ms and 17.8 ms, respectively. There was no effect at 20°, p = .761, suggesting a gradual decrease in the effect of distractor items as target-distractor distance increases from 10° to 15°, which then dissipates completely at 20° (Figure 4C). Additionally, we found a reversal of this effect, with slower RTs for two than for seven distractors, at a target-distractor distance of 180°, p = 0.001, with additional distractors reducing RTs by 14.6 ms/item on average. Distractors did not significantly affect RTs at 30° or 60°, p > 0.138, with slopes <10 ms/item. Although distractors no longer slowed performance beyond 20°, overall RTs continued to improve, similar to Experiment 1a. With two distractors, RTs were significantly slower at 10° to 20° compared to later distances (p < 0.001), whereas later distances did not differ from one another (p > 0.06). In contrast, with seven distractors, RTs were significantly faster for each increase in target-distractor distance (p < 0.024). 
Figure 4.
 
Top panels: mean visual search response times for Experiment 2 as a function of (A) target-distractor distance around the color wheel and (B) psychological dissimilarity, as measured with the quad color similarity task for each participant. Data are plotted separately for trials with two or seven distractors. Dashed lines correspond to exponential model fits. Bottom panels: difference in response times for trials with two versus seven distractors as a function of (C) target-distractor distance and (D) psychological dissimilarity. All error bars are within-subject SEM.
As in Experiment 1a, exponential model fits captured performance well (Figures 4A, 4B). Model comparisons based on target-distractor distance revealed that the number of distractors in the display significantly modulated the model intercept (A), \(\chi _5^2\) = 94.0, p < 0.001, slope (L), \(\chi _1^2\) = 257.4, p < 0.001, and floor (F), \(\chi _1^2\) = 121.0, p < 0.001. The direction of the parameter estimates indicated that more distractors resulted in slower RTs at smaller target-distractor distances, steeper slopes, and slightly faster RTs at higher target-distractor distances. Fits using psychological dissimilarity were again less steep than those using target-distractor distance, consistent with psychological dissimilarity accounting for some of the nonlinearity observed in the search data. 
Overall, these results indicate that target-distractor distance impairs the efficiency of visual search only at minimal distances around the color wheel: only when targets and distractors were <20° apart in stimulus-based distance was search efficiency—operationalized as search slope—affected, such that participants were slower to find the target when more distractors were present. However, we also found evidence that participants were faster to detect targets when more distractors at large color distances were present in the display (Bravo & Nakayama, 1992), which may imply that more efficient texture segmentation processes are involved on such trials (an idea we return to in more detail in the Discussion). 
Experiment 3: How does target-distractor similarity affect sustained feature-based selection?
In Experiment 3, we used a sustained feature-based attention task to test whether our findings generalize to other tasks commonly used in the field of feature-based selection. We implemented a feature-based attention task that uses spatially overlapping random-dot kinematograms (RDKs). On each trial, participants were presented with an RDK containing two sets of spatially intermingled, differently colored dots, with each dot moving independently in a random direction. The colors of the two sets of dots were manipulated from trial to trial, and a central fixation point indicated the target color on a given trial. Participants were instructed to attend selectively to the dots in the target color to detect a brief decrease in the luminance of those dots, while ignoring distractor changes. Performance on this task therefore indexes participants’ ability to selectively attend to the target color as the distractors are made more or less similar in color, independently of spatial attention. 
Method
Participants
Forty-three undergraduate students from the University of California, San Diego subject pool participated in this experiment for course credit. Five participants were excluded from analyses based on preregistered exclusion criteria (dʹ < 0.5 with 180° distractors). The final sample therefore consisted of 38 participants (24 women), aged 18-24 (M = 19.9 ± 1.5 years), with normal or corrected-to-normal vision, and provides 80% power to detect a significant effect of \(\eta _p^2\) > .258. 
Stimuli
Participants completed this experiment in lab and were seated at a distance of approximately 60 cm from the display. A centered circular field of dots (5.8° visual angle radius) was presented on a black background. This field contained 200 dots moving independently and randomly at 2.25°/s. Half of the dots were presented in the target color, and half in the distractor color. A square cue (0.5° × 0.5°) was presented in the target color at the center of the dot field, indicating on each trial which set of dots participants should attend. To prevent participants from tracking single dots, each dot had a limited lifetime and was redrawn at a new random location every 300 ms. 
Colors were drawn from the same set as Experiment 1. A target color was randomly selected on each trial, and the distance between target and distractor colors was manipulated across 6 levels (15°, 30°, 45°, 60°, 90°, or 180°). Distractor colors were randomly chosen clockwise or counterclockwise relative to the target. 
Procedure
In the sustained attention task (Figure 5), participants attended to dots in a particular color for the duration of the trial. At the beginning of each trial, the target was indicated by the color of the central cue, which was presented for a random duration between 400 and 800 ms. After this time, the target and distractor dots appeared on the display simultaneously and remained onscreen until the end of the trial, 2000 ms later. Participants were instructed to attend to the dots in the cued color for the duration of the trial to detect brief decreases in luminance (300 ms). The luminance decrease could appear at a random time during the trial with the constraint that it could not occur in the first or last 300 ms of the stimulus presentation. At the end of each trial, participants indicated whether this change occurred in the target dots by responding on the keyboard (“m” for a target change, “n” for no target change). The luminance change could occur in the target dots (50% of trials), the distractor dots (25%), or neither set of dots (25%). Participants completed 288 trials of this task (48 per distractor distance condition), separated into six equally sized blocks. 
Figure 5.
 
Example trial structure for Experiment 3. Participants were instructed to attend to dots in a particular color (indicated by the square in the center of the display) to detect any decreases in luminance of the dots that could occur during the trial. Stimuli in the figure are not presented to scale.
The magnitude of the luminance decrease was determined for each individual at the beginning of the experiment session through a thresholding task. Participants completed 32 trials per thresholding run, in which the luminance decrease was adjusted using a staircasing method: the change became smaller (less detectable) after two consecutive correct responses, and larger (more detectable) after an incorrect response. The luminance decrement was initially set at 50% of the maximum luminance of the dots and was adjusted by 2% with each step. During the thresholding task, the target-distractor distance was set to 180°. Accuracy was fit with a logistic curve using the Palamedes toolbox (Prins & Kingdom, 2009) with a guess rate of 50%, and thresholds were selected as the luminance decrement corresponding to 70% accuracy. Participants completed 1-4 runs of the thresholding task until performance was adequately estimated (M = 2.22 runs, SD = 0.95). 
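The staircase logic amounts to a simple 2-down/1-up update rule, which converges near 71% accuracy, close to the 70% threshold extracted from the logistic fit (a sketch in R for consistency with the other analyses; the published thresholding used the Palamedes toolbox for the fit itself):

```r
# 2-down/1-up staircase on the luminance decrement, expressed as a
# percentage of the dots' maximum luminance: harder after two consecutive
# correct responses, easier after any error; 2% steps, starting at 50%.
decrement <- 50
step <- 2
n_correct <- 0

update_staircase <- function(correct) {
  if (correct) {
    n_correct <<- n_correct + 1
    if (n_correct == 2) {                 # two in a row: make it harder
      decrement <<- max(decrement - step, step)
      n_correct <<- 0
    }
  } else {                                # any error: make it easier
    decrement <<- min(decrement + step, 100)
    n_correct <<- 0
  }
  decrement
}

update_staircase(TRUE)   # 50 (one correct response so far)
update_staircase(TRUE)   # 48 (second consecutive correct: step down)
update_staircase(FALSE)  # 50 (error: step back up)
```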
After completing the main task, we assessed perceptual similarity around the color wheel using a color triad task (Schurgin et al., 2020; Torgerson, 1958). On each trial, participants saw three colored circles (2.5° radius): one target color, presented in the top half of the display, and two test colors, presented side by side in the lower half of the display. Participants were instructed to select which of the two test circles was most similar in color to the target circle using the left or right arrow keys. The target color was chosen randomly from one of 360 positions around the color wheel, and the test colors varied systematically in their relationship to the target so that we could estimate the psychological similarity function. Specifically, the distances of the test colors from the target could be 0°, 15°, 30°, 45°, 60°, 90°, 135°, or 180°. The two test colors were selected such that they were at most three steps apart from each other in this space (e.g., 45° vs. 90° was possible, but 15° vs. 180° was not). The correct response was always the color that was closest to the target around the color wheel and was equally often presented on the left or right side of the display. We intended to collect 216 trials of this task; however, due to a technical error, half of the participants (21/42) completed only 144 trials. Additionally, we did not collect data for this task from one participant because of time constraints. 
Data analysis
Triad color similarity task
Psychological similarity was calculated as in Experiment 1, using MLDS with the GLM approach adapted for the triad task (Knoblauch & Maloney, 2012). 
Sustained feature-based attention task
To assess the effect of target-distractor similarity on performance in the attention task, we conducted a repeated-measures ANOVA across this factor with dʹ as the dependent variable. To calculate dʹ, we measured hit rate as the proportion of trials in which a participant detected luminance decreases in the target dots, and the false alarm rate as the proportion of trials in which a participant falsely reported a target change (i.e., the distractor changed, or no change occurred). Follow-up pairwise comparisons were conducted using FDR correction (α < .05). 
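For reference, the dʹ computation looks as follows (a minimal sketch; the correction for extreme rates shown here is one common convention, not necessarily the authors' choice):

```r
# dprime from hit and false-alarm counts per subject and distance level.
# Hits: detected luminance decreases in the target dots. False alarms:
# "target change" responses on distractor-change or no-change trials.
dprime <- function(n_hit, n_target, n_fa, n_nontarget) {
  # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
  h <- (n_hit + 0.5) / (n_target + 1)
  f <- (n_fa + 0.5) / (n_nontarget + 1)
  qnorm(h) - qnorm(f)
}

# Example: 24 target-change and 24 nontarget trials per distance level.
dprime(n_hit = 20, n_target = 24, n_fa = 4, n_nontarget = 24)
```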
Modeling of exponential functions
As in the visual search experiments, we fit individual-subject dʹ using exponential functions. The model-fitting procedure was similar to Experiment 1a, as there was only a single condition to fit. Because each trial yields only a single hit or false alarm, we fit the dʹ estimates computed across trials (as in the repeated-measures analysis) rather than single-trial data. The full model contained random effects for each parameter, after which the random effect of L was removed because of high correlations with the other random effects. For modeling with the dissimilarity measure, data from one participant were excluded due to large negative MLDS estimates for some conditions. 
Results
Performance on the triad task was high (M = 87.1%, SD = 6.9), and estimates from MLDS were extremely similar to those obtained online with the quad similarity task (Figure 1D). Comparable to Experiments 1a and 2, the midpoint in psychological dissimilarity occurred near 30° (dissimilarity = 0.49). In the sustained attention task, there was a main effect of target-distractor distance on dʹ, F(5,185) = 17.18, p < 0.001, \(\eta _p^2\) = .464, shown in Figure 6A. Pairwise comparisons revealed that performance was significantly lower with 15° distractors than at all other distances, p < 0.001, whereas performance at 30° was significantly lower only than at 180°, p = 0.029. For all other comparisons, there was no significant change in performance, p > 0.10. This pattern is consistent with Experiment 1, where improvement in RT flattened out at approximately 40° to 50° distance. Overall, this confirms that selection is more efficient as the distance between targets and distractors increases. For both target-distractor distance (Figure 6A) and psychological dissimilarity (Figure 6B), exponential functions captured performance reasonably well, providing another link between the sustained attention task and visual search. 
Figure 6.
 
Luminance discrimination performance for Experiment 3 as a function of (A) target-distractor distance around the color wheel and (B) psychological dissimilarity, as measured with the triad color similarity task for each participant. Error bars are within-subject SEM; dashed lines correspond to exponential model fits.
To assess what changes in behavior contributed to the effect on dʹ, we also tested changes in hit rates and false alarm rates across target-distractor distances. There was a main effect of target-distractor distance on hit rate, F(5,185) = 15.54, p < 0.001, \(\eta _p^2\) = .420, such that hit rates were lower at 15° than at all other distances, p < 0.001, and lower at 30° than at larger distances, p < 0.012, but there were no differences among larger distances, p > .51. There was also a main effect on false alarms to distractor changes, F(5,185) = 8.16, p < 0.001, \(\eta _p^2\) = .221, such that false alarms were greater at 15° than at other distances, p < .004. There was a marginal difference between false alarm rates at 45° and 180°, p = 0.071, but no significant difference for any other comparison, p > 0.33. There was no difference in baseline false alarm rate (“no change” trials) across target-distractor distance, F(5,185) = 0.56, p = 0.729, \(\eta _p^2\) = .015. This response breakdown is also presented in Supplementary Figure S6. 
Discussion
We investigated the effect of target-distractor similarity on the efficiency of attentional selection using visual search and sustained attention tasks. The results from all experiments were strikingly comparable: feature similarity had a nonlinear effect on performance that plateaued at moderate distances in color space (∼40°–60° across all experiments), and the number of distractors impacted performance only when targets and distractors were highly similar (search was affected at 10° and 15°, but not at 20° or beyond). That is, even after search slopes reached zero, overall RTs continued to improve significantly with increasing dissimilarity. Furthermore, across all experiments, exponential functions provided a good fit to the data whether performance (RT or dʹ) was predicted by similarity in stimulus space or by psychological similarity estimates. Thus, our results are not restricted to one particular task or manipulation, but generalize across visual search and sustained attention tasks that are commonly used to study feature-based attention. Additionally, we found comparable psychological dissimilarity functions across our online and in-lab experiments (Figure 1D), confirming the generalizability of our tasks across different testing environments. Overall, our study provides important novel data on the similarity structure of feature representations, in particular color, and demonstrates that attentional performance can in part be explained by nonlinearities in the psychological dissimilarity functions. These data therefore indicate that considering the psychological similarity structure of a given stimulus space is important for understanding the capacity limits of attention, as it can explain some of the variance observed in behavior. The results also reveal that psychological similarity is only one part of the puzzle, and that other factors, presumably attentional in nature, are necessary to explain the nonlinearities in the search and sustained attention tasks. 
Feature similarity constrains current models of visual search
Theories of attention agree that similarity between target and nontarget items has a significant impact on performance (Duncan & Humphreys, 1989; Geng & Witkowski, 2019; Wolfe & Horowitz, 2004), supported by a number of previous findings (Arun, 2012; Becker et al., 2013; Buetti et al., 2019; Nagy & Cone, 1996; Nagy & Sanchez, 1990; Reijnen et al., 2007; Wolfe et al., 1999). Our study extends this research by estimating attentional performance across a wide range of target-distractor similarity levels and quantifying the interaction between similarity and the number of visual search distractors. Furthermore, we independently estimated psychological similarity and compared it to similarity in the underlying stimulus space. Previous theories have considered in detail the different processing stages involved in attentional selection, and in visual search in particular. In most theories, the distinction between serial and parallel search rests on target-distractor similarity: search is parallel and efficient when similarity is low, but serial when similarity is high. Our findings suggest that similarity is not a single distinguishing factor between these two proposed stages, because we found two distinct effects of similarity on behavior: the effect of additional visual search distractors was rapidly eliminated at low target-distractor distances (e.g., <20°), producing small or nonsignificant search slopes generally taken to indicate parallel processing, whereas performance plateaued (no further decrease in RTs or increase in detection sensitivity) only at relatively moderate levels of target-distractor distance. 
Recent work has shown that the small or seemingly flat search slopes reported in many previous studies actually reflect processing time that increases logarithmically with the number of distractor items during efficient search (Buetti et al., 2016; Lleras et al., 2020), so it is likely that extending the number of distractors in our displays (e.g., from a maximum of seven to 15 or 31) would produce small increases in response times even for target-distractor distances beyond 20°. However, the dissociation between the point at which search slopes are effectively eliminated and the point at which RTs reach their minimum is, to our knowledge, a novel contribution of our study, and one that several models of visual search do not anticipate or are agnostic to. This suggests that perceptual similarity between target and distractor items has at least two distinct impacts on feature-based selection. 
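To illustrate the distinction, the sketch below fits both the classic linear set-size function and a logarithmic one to hypothetical mean RTs; the set sizes and RT values are invented for demonstration and simply follow the logarithmic pattern that Buetti et al. (2016) and Lleras et al. (2020) report for efficient search.

```r
# Hypothetical efficient-search data: RT grows roughly by a constant amount
# per doubling of set size, i.e., logarithmically in the number of items.
setsize <- c(4, 8, 16, 32)
rt      <- c(520, 545, 571, 598)   # mean RTs in ms (invented)

lin_fit <- lm(rt ~ setsize)        # classic linear slope (ms per item)
log_fit <- lm(rt ~ log2(setsize))  # logarithmic slope (ms per doubling)

AIC(lin_fit, log_fit)              # the log model wins for these data
coef(log_fit)["log2(setsize)"]     # roughly 26 ms per doubling of set size
```

With only two set sizes, as in a typical two-level design, the two models cannot be distinguished; diagnosing logarithmic growth requires three or more set sizes.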
Our data also indicate that psychological dissimilarity metrics, in particular, offer a way to unify findings from attention studies that use different stimuli. Studies employ a wide range of stimulus sets, from basic features such as color and orientation to complex multidimensional object sets, with no simple way of comparing them directly. Even within a single stimulus space, researchers may pick different subsets of stimuli for their experiments. Effects of target-distractor similarity might appear completely different when compared across the dimensions of each stimulus space, but psychological dissimilarity, which is based on perceptual judgments in an independent task, provides a single dimension on which different stimulus sets can potentially be compared directly. If the same critical points, such as the flattening of search slopes or of overall RTs, emerge for different stimuli along the psychological dissimilarity axis, this would support a general principle of feature-based selection based on this perceptual dissimilarity metric, unifying work done across many different stimulus spaces. Although we do not test this here, we believe our study lays the groundwork for such studies in the future. 
Bridging findings from visual search and sustained feature-based attention tasks
The results of Experiment 3, in which the pattern of performance during sustained feature-based attention was comparable to that in the visual search experiments, suggest that models based on the analysis of visual search tasks may generalize to other attention tasks. Notably, this raises the question of how the traditional “two-stage” distinction between parallel and serial processing applies outside the context of visual search. Sustained feature-based attention tasks have generally been taken to demonstrate the global nature of feature-based attention, with all stimuli that match the target feature receiving the benefits of attention (Andersen et al., 2013; Sàenz et al., 2002, 2003; Serences & Boynton, 2007), and this global enhancement has been shown to occur rapidly during visual processing (Andersen & Müller, 2010; Martinez-Trujillo & Treue, 2004; Schoenfeld, Hopf, Merkel, Heinze, & Hillyard, 2014). This suggests that the effects of sustained feature-based attention may be more in line with the parallel processing stage of visual search models. Consistent with this idea, the rate of false alarms in Experiment 3 (i.e., mistaking distractor luminance changes for target changes) was significantly elevated only when the target-distractor distance was 15°, the same distance below which we found effects of the number of distractors in Experiment 2. This suggests that decreased performance in sustained attention tasks may reflect the inability of attention to efficiently separate target and distractor representations at such small stimulus distances. 
However, although our findings might justify surface-level comparisons between sustained attention and visual search tasks, many aspects of current models are specific to visual search and do not readily translate across tasks. Because sustained attention tasks are designed to isolate aspects of selection, rather than the guiding of attention to targets, they may not be sensitive to manipulations targeted at serial stages of processing, such as those analogous to varying the number of visual search distractors. Indeed, because sustained feature-based attention tasks usually use target displays consisting of arrays of many stimuli (such as the dot-motion displays in Experiment 3), they appear to be particularly sensitive to perceptual grouping of items. For example, dividing attention across displays is easier when the targets in each display share features than when they are in opposition (Sàenz et al., 2003). Perceptual grouping can also affect performance in visual search, such as when distractor items form a texture “background” against which the target can be easily identified (Rangelov, Müller, & Zehetleitner, 2013). In such cases, increasing the number of distractors in the display can counterintuitively improve performance (Bravo & Nakayama, 1992; Buetti et al., 2016). Additionally, more distractors may increase the saliency of the target item, aiding search (Itti & Koch, 2000). We observed such a pattern for 180° distractors in Experiment 2, suggesting that these effects may arise only when targets are sufficiently distinct from distractors. However, given that segmentation can facilitate parallel visual search but is not the same as perceptual grouping (Wolfe, 1992), it remains unclear whether perceptual grouping can bridge the gap between visual search and other tasks or whether the similarities between tasks are driven by additional factors outside the scope of visual search models. Nevertheless, our findings indicate that it would be valuable for researchers to consider how models of attention could be generalized to encapsulate findings from multiple tasks. 
Attentional performance is nonlinearly related to target-distractor similarities
Across all experiments, we found that performance was nonlinearly related both to the stimulus-based distance between targets and distractors and, importantly, to psychological dissimilarity, and was well described by an exponential function of similarity (see also Blough, 1988). It would have been plausible that the commonly observed nonlinearities between attentional performance and target-distractor similarity (e.g., Arun, 2012; Nagy & Sanchez, 1990) are driven mostly by nonlinearities in perception (i.e., the psychological dissimilarity function), but here we demonstrate that this is not the case. In the context of theories of generalization (Shepard, 1987; Sims, 2018), this implies that “similarity” between stimuli within attention tasks differs from similarity measured in perceptual tasks such as the one used in this study. Thus, while target and distractor features may lie a particular distance apart in psychological space, as assessed in simple psychophysical tasks like ours, this distance is not fixed per se but changes depending on how these features are processed. Specifically, when a target is presented among similar nontargets, the perceived target-distractor dissimilarity may be exaggerated to enable the most efficient selection. This interpretation is analogous to findings in spatial attention, where attention has been found to increase the perceived distance between stimuli (Suzuki & Cavanagh, 1997), and is consistent with models that describe the effects of attention through shifts of tuning or enhancement of off-tuned features (Geng & Witkowski, 2019; Navalpakkam & Itti, 2007; Scolari, Byers, & Serences, 2012; Scolari & Serences, 2009; Yu & Geng, 2019). In particular, these accounts highlight that enhancement of off-tuned features best differentiates representations when stimuli are highly similar, consistent with our finding that performance reaches a plateau at only moderate distances between targets and distractors. Of course, this is only one possible explanation for the nonlinearities we observe after accounting for psychological dissimilarity, and further research is necessary to test this interpretation. 
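Written out, the connection to Shepard's (1987) exponential generalization gradient is direct. Under our parameterization (the symbols below are our illustrative choices, not notation taken from the original fitting code),

\[ g(d) = e^{-d/\tau}, \qquad \mathrm{RT}(d) = \mathrm{RT}_{\infty} + \beta\, g(d), \qquad d'(d) = d'_{\max}\bigl(1 - g(d)\bigr), \]

where \(d\) is the target-distractor dissimilarity (in stimulus or psychological units) and \(\tau\) sets how quickly performance saturates; the behavioral plateau corresponds to the regime \(d \gg \tau\), where \(g(d) \approx 0\). The point made above is that this exponential dependence persists even when \(d\) is expressed in psychological units, indicating nonlinearity beyond the perceptual scale itself.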
Summary and conclusion
In summary, our study adds in important ways to the literature on feature-based attention: (1) We compared performance across visual search and sustained attention tasks, bridging different tasks and experimental designs. Similarity between targets and distractors has not previously been manipulated systematically in sustained feature-based attention tasks, so our findings in Experiment 3 are novel in this regard, in addition to allowing direct comparisons with visual search. (2) We varied target-distractor similarity at very high resolution, revealing the detailed relationship between attentional performance and similarity and how it changes with set size (for visual search). (3) By measuring psychological similarity, we assessed how nonlinearities in visual search RTs change with different similarity metrics. It would have been plausible that psychological similarity was linearly related to search RT, and while our findings rule out that strong hypothesis, we do find that it accounts for the nonlinear search functions to some extent. This is an important and novel piece of evidence that will contribute to models of visual attention. (4) Finally, our approach showcases a way to measure and potentially equate similarity spaces across distinct feature dimensions or testing environments (e.g., in-lab and online), which will be useful for future studies on attention and cognition more broadly. We believe this has the potential to bridge across distinct stimulus spaces: while orientation and color are essentially impossible to compare in their native dimensions (e.g., orientation spans a 180° space, whereas the color wheel spans 360°), performance could instead be compared in psychological similarity space. The same holds for stimuli without easily measured dimensions, such as real-world objects. 
Overall, our study demonstrates that the nonlinear relationship between target-distractor dissimilarity and attentional performance can be explained in part by nonlinearities in the underlying representational similarity structure; however, psychological dissimilarity does not fully explain the nonlinearities observed during selection. Thus, we hypothesize that selective attention may enhance performance by modulating the similarity structure to pull target and distractor features further apart. Most broadly, our data demonstrate the importance of understanding the underlying representational structure of the feature space to inform models of selection. 
Acknowledgments
The authors thank Lora Hsu and Xinwen Wang for assistance with data collection for Experiment 3, Tim Brady for sharing initial analysis code for MLDS, and Doug Addleman, Kirsten Adam, and John Serences for comments and feedback on an earlier version of this manuscript. We also thank Jeremy Wolfe and an anonymous reviewer for their thoughtful comments on this paper. 
Supported by a grant from the National Science Foundation (BCS-1850738). 
Open data and analysis scripts are available on OSF at: https://osf.io/swdqk/
Commercial relationships: none. 
Corresponding author: Angus Chapman. 
Address: Department of Psychology, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0109, USA. 
Footnotes
1 In this article, we use the term “feature” to refer to specific values that occur within a feature dimension (e.g., “red” or “upward motion”) rather than to the underlying dimensions themselves (e.g., “color” or “motion direction”).
2 Attentional engagement theory also posits that similarity among distractors affects search performance, such that more diverse sets of distractors decrease performance. However, this aspect of the model was not the focus of the current study, and distractors did not differ from one another within each trial.
References
Alexander, R. G., & Zelinsky, G. J. (2012). Effects of part-based similarity on visual search: The Frankenbear experiment. Vision Research, 54, 20–30, https://doi.org/10.1016/j.visres.2011.12.004.
Andersen, S. K., Hillyard, S. A., & Müller, M. M. (2008). Attention facilitates multiple stimulus features in parallel in human visual cortex. Current Biology, 18, 1006–1009, https://doi.org/10.1016/j.cub.2008.06.030.
Andersen, S. K., Hillyard, S. A., & Müller, M. M. (2013). Global facilitation of attended features is obligatory and restricts divided attention. Journal of Neuroscience, 33, 18200–18207, https://doi.org/10.1523/JNEUROSCI.1913-13.2013.
Andersen, S. K., & Müller, M. M. (2010). Behavioral performance follows the time course of neural facilitation and suppression during cued shifts of feature-selective attention. Proceedings of the National Academy of Sciences, 107, 13878–13882, https://doi.org/10.1073/pnas.1002436107.
Andersen, S. K., Müller, M. M., & Hillyard, S. A. (2009). Color-selective attention need not be mediated by spatial attention. Journal of Vision, 9(6), 1–7, https://doi.org/10.1167/9.6.2.
Arun, S. P. (2012). Turning visual search time on its head. Vision Research, 74, 86–92, https://doi.org/10.1016/j.visres.2012.04.005.
Bae, G.-Y., Olkkonen, M., Allred, S. R., & Flombaum, J. I. (2015). Why some colors appear more memorable than others: A model combining categories and particulars in color working memory. Journal of Experimental Psychology: General, 144, 744–763, https://doi.org/10.1037/xge0000076.
Bates, D., Maechler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48, https://doi.org/10.18637/jss.v067.i01.
Becker, S. I., Folk, C. L., & Remington, R. W. (2013). Attentional capture does not depend on feature similarity, but on target-nontarget relations. Psychological Science, 24, 634–647, https://doi.org/10.1177/0956797612458528.
Blough, D. S. (1988). Quantitative relations between visual search speed and target-distractor similarity. Perception & Psychophysics, 43, 57–71, https://doi.org/10.3758/BF03208974.
Bravo, M. J., & Nakayama, K. (1992). The role of attention in different visual-search tasks. Perception & Psychophysics, 51, 465–472, https://doi.org/10.3758/BF03211642.
Buetti, S., Cronin, D. A., Madison, A. M., Wang, Z., & Lleras, A. (2016). Towards a better understanding of parallel visual processing in human vision: Evidence for exhaustive analysis of visual information. Journal of Experimental Psychology: General, 145, 672–707, https://doi.org/10.1037/xge0000163.
Buetti, S., Xu, J., & Lleras, A. (2019). Predicting how color and shape combine in the human visual system to direct attention. Scientific Reports, 9(1), 1–11, https://doi.org/10.1038/s41598-019-56238-9.
Carrasco, M. (2011). Visual attention: The past 25 years. Vision Research, 51, 1484–1525, https://doi.org/10.1016/j.visres.2011.04.012.
Chapman, A. F., & Störmer, V. S. (2021). Feature-based attention is not confined by object boundaries: Spatially global enhancement of irrelevant features. Psychonomic Bulletin & Review, 28, 1252–1260, https://doi.org/10.3758/s13423-021-01897-x.
Duncan, J., & Humphreys, G. W. (1992). Beyond the search surface: Visual search and attentional engagement. Journal of Experimental Psychology: Human Perception and Performance, 18, 578–588, https://doi.org/10.1037/0096-1523.18.2.578.
Duncan, J., & Humphreys, G. W. (1989). Visual search and stimulus similarity. Psychological Review, 96, 433–458, https://doi.org/10.1037/0033-295X.96.3.433.
Fang, M. W. H., Becker, M. W., & Liu, T. (2019). Attention to colors induces surround suppression at category boundaries. Scientific Reports, 9, 1443, https://doi.org/10.1038/s41598-018-37610-7.
Geng, J. J., & Witkowski, P. (2019). Template-to-distractor distinctiveness regulates visual search efficiency. Current Opinion in Psychology, 29, 119–125, https://doi.org/10.1016/j.copsyc.2019.01.003.
Itti, L., & Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40(10–12), 1489–1506, https://doi.org/10.1016/S0042-6989(99)00163-7.
Knoblauch, K., & Maloney, L. T. (2012). Modeling psychophysical data in R. Berlin: Springer Science & Business Media.
Lee, J., & Geng, J. J. (2020). Flexible weighting of target features based on distractor context. Attention, Perception, & Psychophysics, 82, 739–751, https://doi.org/10.3758/s13414-019-01910-5.
Lleras, A., Wang, Z., Ng, G. J. P., Ballew, K., Xu, J., & Buetti, S. (2020). A target contrast signal theory of parallel processing in goal-directed search. Attention, Perception, & Psychophysics, 82, 394–425, https://doi.org/10.3758/s13414-019-01928-9.
Lleras, A., Wang, Z., Madison, A., & Buetti, S. (2019). Predicting search performance in heterogeneous scenes: Quantifying the impact of homogeneity effects in efficient search. Collabra: Psychology, 5(1), 1–15, https://doi.org/10.1525/collabra.151.
Maloney, L. T., & Yang, J. N. (2003). Maximum likelihood difference scaling. Journal of Vision, 3, 573–585, https://doi.org/10.1167/3.8.5.
Martinez-Trujillo, J. C., & Treue, S. (2004). Feature-based attention increases the selectivity of population responses in primate visual cortex. Current Biology, 14, 744–751, https://doi.org/10.1016/j.cub.2004.04.028.
Müller, M. M., Andersen, S. K., Trujillo, N. J., Valdés-Sosa, P., Malinowski, P., & Hillyard, S. A. (2006). Feature-selective attention enhances color signals in early visual areas of the human brain. Proceedings of the National Academy of Sciences, 103, 14250–14254, https://doi.org/10.1073/pnas.0606668103.
Nagy, A. L., & Cone, S. M. (1996). Asymmetries in simple feature searches for color. Vision Research, 36, 2837–2847, https://doi.org/10.1016/0042-6989(96)00046-6.
Nagy, A. L., & Sanchez, R. R. (1990). Critical color differences determined with a visual search task. Journal of the Optical Society of America A, 7, 1209–1217, https://doi.org/10.1364/JOSAA.7.001209.
Navalpakkam, V., & Itti, L. (2007). Search goal tunes visual features optimally. Neuron, 53, 605–617, https://doi.org/10.1016/j.neuron.2007.01.018.
Ng, G. J. P., Buetti, S., Patel, T. N., & Lleras, A. (2021). Prioritization in visual attention does not work the way you think it does. Journal of Experimental Psychology: Human Perception and Performance, 47, 252–268, https://doi.org/10.1037/xhp0000887.
Pashler, H. (1987). Target-distractor discriminability in visual search. Perception & Psychophysics, 41, 285–292, https://doi.org/10.3758/BF03208228.
Prins, N., & Kingdom, F. A. A. (2009). Palamedes: Matlab routines for analyzing psychophysical data. Retrieved from http://www.palamedestoolbox.org.
R Core Team. (2021). R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing. Retrieved from https://www.r-project.org/.
Rangelov, D., Müller, H. J., & Zehetleitner, M. (2013). Visual search for feature singletons: Multiple mechanisms produce sequence effects in visual search. Journal of Vision, 13(3), 1–16, https://doi.org/10.1167/13.3.22.
Reijnen, E., Wallach, D., Stöcklin, M., Kassuba, T., & Opwis, K. (2007). Color similarity in visual search. Swiss Journal of Psychology, 66, 191–199, https://doi.org/10.1024/1421-0185.66.4.191.
Rosenholtz, R., Huang, J., Raj, A., Balas, B. J., & Ilie, L. (2012). A summary statistic representation in peripheral vision explains visual search. Journal of Vision, 12, 1–17, https://doi.org/10.1167/12.4.14.
Sàenz, M., Buraĉas, G. T., & Boynton, G. M. (2002). Global effects of feature-based attention in human visual cortex. Nature Neuroscience, 5, 631–632, https://doi.org/10.1038/nn876.
Sàenz, M., Buraĉas, G. T., & Boynton, G. M. (2003). Global feature-based attention for motion and color. Vision Research, 43, 629–637, https://doi.org/10.1016/S0042-6989(02)00595-3.
Schoenfeld, M. A., Hopf, J.-M., Merkel, C., Heinze, H.-J., & Hillyard, S. A. (2014). Object-based attention involves the sequential activation of feature-specific cortical modules. Nature Neuroscience, 17, 619–624, https://doi.org/10.1038/nn.3656.
Schurgin, M. W., Wixted, J. T., & Brady, T. F. (2020). Psychophysical scaling reveals a unified theory of visual memory strength. Nature Human Behaviour, 4, 1156–1172, https://doi.org/10.1038/s41562-020-00938-0.
Scolari, M., Byers, A., & Serences, J. T. (2012). Optimal deployment of attentional gain during fine discriminations. Journal of Neuroscience, 32, 7723–7733, https://doi.org/10.1523/JNEUROSCI.5558-11.2012.
Scolari, M., & Serences, J. T. (2009). Adaptive allocation of attentional gain. Journal of Neuroscience, 29, 11933–11942, https://doi.org/10.1523/JNEUROSCI.5642-08.2009.
Serences, J. T., & Boynton, G. M. (2007). Feature-based attentional modulations in the absence of direct visual stimulation. Neuron, 55, 301–312, https://doi.org/10.1016/j.neuron.2007.06.015.
Shepard, R. N. (1987). Toward a universal law of generalization for psychological science. Science, 237(4820), 1317–1323.
Shih, S. I., & Sperling, G. (1996). Is there feature-based attentional selection in visual search? Journal of Experimental Psychology: Human Perception and Performance, 22, 758, https://doi.org/10.1037/0096-1523.22.3.758.
Sims, C. R. (2018). Efficient coding explains the universal law of generalization in human perception. Science, 360(6389), 652–656, https://doi.org/10.1126/science.aaq1118.
Störmer, V. S., & Alvarez, G. A. (2014). Feature-based attention elicits surround suppression in feature space. Current Biology, 24, 1985–1988, https://doi.org/10.1016/j.cub.2014.07.030.
Suchow, J. W., Brady, T. F., Fougnie, D., & Alvarez, G. A. (2013). Modeling visual working memory with the MemToolbox. Journal of Vision, 13(10), 1–8, https://doi.org/10.1167/13.10.9.
Suzuki, S., & Cavanagh, P. (1997). Focused attention distorts visual space: An attentional repulsion effect. Journal of Experimental Psychology: Human Perception and Performance, 23(2), 443–463, https://doi.org/10.1037/0096-1523.23.2.443.
Torgerson, W. S. (1958). Theory and methods of scaling. Hoboken, NJ: Wiley.
Treisman, A. M., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97–136, https://doi.org/10.1016/0010-0285(80)90005-5.
Treue, S., & Martinez-Trujillo, J. C. (1999). Feature-based attention influences motion processing gain in macaque visual cortex. Nature, 399, 575–579, https://doi.org/10.1038/21176.
Vighneshvel, T., & Arun, S. P. (2013). Does linear separability really matter? Complex visual search is explained by simple search. Journal of Vision, 13(11), 1–24, https://doi.org/10.1167/13.11.10.
Wickham, H., Averick, M., Bryan, J., Chang, W., McGowan, L., François, R., & Yutani, H. (2019). Welcome to the Tidyverse. Journal of Open Source Software, 4(43), 1686, https://doi.org/10.21105/joss.01686.
Wolfe, J. M. (1992). “Effortless” texture segmentation and “parallel” visual search are not the same thing. Vision Research, 32, 757–763, https://doi.org/10.1016/0042-6989(92)90190-T.
Wolfe, J. M. (1994). Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin & Review, 1, 202–238, https://doi.org/10.3758/BF03200774.
Wolfe, J. M., & Horowitz, T. S. (2004). What attributes guide the deployment of visual attention and how do they do it? Nature Reviews Neuroscience, 5, 495–501, https://doi.org/10.1038/nrn1411.
Wolfe, J. M., Klempen, N. L., & Shulman, E. P. (1999). Which end is up? Two representations of orientation in visual search. Vision Research, 39, 2075–2086, https://doi.org/10.1016/S0042-6989(98)00260-0.
Yu, X., & Geng, J. J. (2019). The attentional template is shifted and asymmetrically sharpened by distractor context. Journal of Experimental Psychology: Human Perception and Performance, 45, 336–353, https://doi.org/10.1037/xhp0000609.
Figure 1.
(A) Example of a visual search trial with a target stimulus among seven distractors 180° distant on the color wheel. Stimuli and text not to scale. (B) Example of a quad similarity task trial. Participants were instructed to select which pair (top or bottom) was least similar in color. (C) Example of a triad similarity task trial. Participants were instructed to select which item (left or right) was most similar in color to the center stimulus. (D) Psychological dissimilarity estimates from MLDS for Experiments 1a, 2 (online, quad similarity task), and 3 (in-lab, triad similarity task). The CIELab color wheel used in the experiments is presented alongside this function. For all experiments, psychological dissimilarity reached 50% at approximately 30° around the color wheel.
Figure 2.
Mean visual search response times in Experiment 1a as a function of (A) stimulus-based target-distractor distance around the color wheel and (B) psychological dissimilarity, as measured with the quad color similarity task for each participant. Error bars are within-subject SEM; red dashed lines correspond to exponential model fits.
Figure 3.
Mean visual search RTs for each target color group in Experiment 1b. Solid lines correspond to the mean RT for each target-distractor distance, with each line drawn in its particular target color. Error bars are omitted for visibility. Model fits can be found in the supplement.
Figure 4.
Top panels: mean visual search response times for Experiment 2 as a function of (A) target-distractor distance around the color wheel and (B) psychological dissimilarity, as measured with the quad color similarity task for each participant. Data are plotted separately for trials with two or seven distractors. Dashed lines correspond to exponential model fits. Bottom panels: difference in response times for trials with two versus seven distractors as a function of (C) target-distractor distance and (D) psychological dissimilarity. All error bars are within-subject SEM.
Figure 5.
Example trial structure for Experiment 3. Participants were instructed to attend to dots in a particular color (indicated by the square in the center of the display) to detect any decreases in luminance of the dots that could occur during the trial. Stimuli in the figure are not presented to scale.