Cue combination anisotropies in contour integration: The role of lower spatial frequencies
Malte Persike, Günter Meinhardt
Journal of Vision April 2015, Vol.15, 17. doi:https://doi.org/10.1167/15.5.17
Abstract

The combination of local orientation collinearity and spatial frequency contrast in contour integration was studied in two experiments using a 2AFC contour detection and discrimination task. Target contours were defined by local orientation collinearity, spatial frequency contrast between contour and background elements, or both cues. Experiments differed in the source of spatial frequency contrast by manipulating the spatial frequency of either contour or background elements. Cue summation gains, defined as the performance benefit of double cue conditions over single cue conditions, were evaluated and tested against the predictions derived from probability summation and linear summation. Summation gains were generally stronger than linear summation and tended to increase with the single-cue performance level until limited by ceiling effects. Cue summation was particularly large when contour elements exhibited a lower spatial frequency than background elements, regardless of the absolute spatial frequency ranges. The highly effective integration of lower spatial frequency contours in cluttered surrounds is discussed in the context of recent findings on high-level neural representations of contour integration as well as feature synergy.

Introduction
Detecting visual objects requires decomposition of a visual scene into parts and reintegration of these parts into meaningful wholes. The association field model (Field, Hayes, & Hess, 1993) describes a very efficient method for the visual system to integrate spatially disjunct elements into a complete percept. The local association field is a bottom-up mechanism where lateral interconnections of neighboring cells with similar orientation tuning render a contour visible by mutual facilitation between those cells (Hess & Field, 1999). Intercolumnar synaptic fibers spanning preferentially between neurons with similar orientation tuning have been found in the tree shrew (Bosking, Zhang, Schofield, & Fitzpatrick, 1997), the cat (Schmidt, Goebel, Lowel, & Singer, 1997), and the macaque (Malach, Amir, Harel, & Grinvald, 1993) although evidence for the latter is less clear with respect to the existence as such (Angelucci et al., 2002; Stettler, Das, Bennett, & Gilbert, 2002) as well as the correspondence between the spatial properties of horizontal projections and the spatial characteristics of contour perception (Li & Gilbert, 2002). A wealth of psychophysical studies (for a review, see Hess, Hayes, & Field, 2003), electrophysiological investigations (Bauer & Heinze, 2002; Gilad, Meirovithz, & Slovin, 2013), and model simulations (Ernst et al., 2012; Hansen & Neumann, 2008; Li, 1998) have since supported the notion that contour integration may be achieved by local interactions of neurons early in the visual pathway. Although initial accounts placed the association field in visual areas as early as V1, recent findings suggest the involvement of higher levels like the lateral occipital complex (LOC; Kourtzi & Huberle, 2005; Shpaner, Molholm, Forde, & Foxe, 2013) in building a global contour percept (for a review, see Loffler, 2008). 
Most studies of contour integration capitalized on disruptive manipulations to a contour in order to map out the functional constraints of the association field. Contour integration is known to suffer from too much curvature (Pettet, 1999), large interelement distances (Beaudot & Mullen, 2003), high spatial frequency disparities between contour elements (Persike, Olzak, & Meinhardt, 2009), asynchronous flicker of contour elements (Hess, Beaudot, & Mullen, 2001), and increased eccentricity of the contour (Hess & Dakin, 1997), just to name a few. Owing to this line of research, we know which manipulations are detrimental to contour salience and which manipulations keep contour salience largely intact, such as variations in spatial phase (Field, Hayes, & Hess, 1997; Hansen & Hess, 2006) or dichoptic presentation (Huang, Hess, & Dakin, 2006). 
On the flip side, only a few studies have been dedicated to potentially beneficial effects of additional cues in contour integration. When a second cue is added to a contour that is already defined by collinear element orientations, two principal modes of cue combination can be distinguished. First, increased salience can result as a combined effect of two independent detection mechanisms, one operating on collinear orientations (i.e., the association field), and the other on the second feature, possibly in the form of a feature contrast detector. Given the perceptual independence of both mechanisms, the salience of a contour is then determined by the probability of detecting the contour based on orientation collinearity, on the second feature, or both. Benefits in contour integration performance should, in this case, be compatible with probability summation among independent neural mechanisms (Green & Swets, 1988). Second, the salience gain may be due to intrinsic properties of the contour integration mechanism and thus lead to larger summation gains than expected from mere probability summation. Such oversummative combination of cues has been observed for conjunctions of several types of cues such as texture and color (Saarela & Landy, 2012), orientation and spatial frequency (Persike & Meinhardt, 2008; Straube & Fahle, 2010), or shape and luminance (Johnston, Cumming, & Landy, 1994), just to name a few. One mechanism to cause oversummation in contour integration tasks is facilitative interplay between orientation detectors and input from the second feature channel (Meinhardt, Schmidt, Persike, & Röers, 2004). Instead of being processed independently, collinearity signals and information from the second feature channel would interact synergistically (Kubovy, Cohen, & Hollier, 1999) and drive contour salience. 
The studies that have tried to discern between different integration modes for the combination of element collinearity and additional cues showed mixed results. Performance benefits from the combination of element collinearity with a motion cue (Ledgeway, Hess, & Geisler, 2005) and depth cues (Hess, Hayes, & Kingdom, 1997; Altmann, Bülthoff, & Kourtzi, 2003) were no larger than would be expected from independent cue processing. Only two studies that we know of have successfully reported otherwise. Adding in-phase flicker to contour elements (Bex, Simmers, & Dakin, 2001) and supplementing contours with a coincident texture cue (Machilsen & Wagemans, 2011) both yielded salience gains significantly above probability summation. Such heterogeneous results with different types of cues should be expected, given that the impact of feature combination on perceptual salience is highly dependent on the specific features used (Nothdurft, 2000). In a feature contrast salience matching task, pairing a color cue with an orientation cue pointed to independence between mechanisms, resulting in probability summation of single cue saliences. Adding a motion cue to an orientation cue on the other hand revealed considerable interplay between salience mechanisms, supporting an interactive processing regime. This indicates constraints in the architecture of the visual system governing the degree to which different features might interact. The mixed evidence on feature combinations in contour integration, therefore, may very well result from the particular choice of features. 
Considering the architecture of early visual sites and specifically V1, spatial frequency could be a promising candidate to evoke oversummative salience gains in contour integration. Combinations of spatial frequency and orientation are effortlessly perceived (Sagi, 1988), possibly due to the fact that orientation and spatial frequency are jointly coded by V1 neurons (De Valois, Albrecht, & Thorell, 1982). Moreover, the combination of orientation and spatial frequency cues in figure detection (Persike & Meinhardt, 2006) and identification tasks (Meinhardt, Persike, Mesenholl, & Hagemann, 2006) produces salience gains that are far larger than predicted by probability summation. Since the association field model assumes contour integration to be the result of lateral facilitation among orientation and spatial frequency selective units, it is straightforward to ask whether combining element collinearity with a spatial frequency cue enhances contour integration beyond what would be expected from independent feature processing. 
That being said, the use of spatial frequency as a second cue to scrutinize cue combination effects in contour integration is not without challenges. Although contour integration performance is invariant over a wide range of carrier spatial frequencies (Dakin & Hess, 1998), the visual system exhibits prominent asymmetries in the processing of spatial frequency information. Low spatial frequencies elicit a processing advantage in grating detection tasks (Tolhurst, 1975; Vassilev & Mitov, 1976) and search tasks (Carrasco, McLean, Katz, & Frieder, 1998), where low spatial frequency targets are detected faster and more accurately than high spatial frequency targets. Low spatial frequencies also play an important role in shape representation (Navon, 1977). Global shape processing is assumed to start earlier when low spatial frequencies are present in a target stimulus (Han, Yund, & Woods, 2003), perhaps due to transient mechanisms operating preferentially at low spatial frequencies and sustained mechanisms at high spatial frequencies (Legge, 1978). Fast, scale selective pathways tuned to low spatial frequencies within the dorsal stream modulate the processing of more slowly conducted feedforward inputs to areas V1, V4, and IT (Chen et al., 2007). Many studies further agree that the left visual field (right hemisphere) is more efficient at processing low spatial frequencies, whereas the right visual field (left hemisphere) is more efficient with high spatial frequencies (Sergent & Hellige, 1986), although newer evidence suggests significant involvement of higher order attentional mechanisms, which are independent of spatial frequency (Hübner, 1997). The hemispheric asymmetry not only applies to absolute spatial frequency bands but also to the relative spatial frequency disparity between a stimulus and its surround, and it is of different magnitude between detection and discrimination tasks (Christman, Kitterle, & Hellige, 1991). 
Consequently, when using spatial frequency cues in shape perception tasks, care should be taken as to which absolute and relative ranges of spatial frequencies are realized. The present study therefore employed two variants of spatial frequency contrast. Contour elements either exhibited a higher or a lower spatial frequency than background elements. We conducted two experiments to examine the gain in contour salience caused by combining orientation collinearity with a spatial frequency cue. The two experiments differed with respect to the source of feature contrast variation. In the first experiment, the spatial frequency of contour elements was shifted upward or downward against the constant background spatial frequency. The second experiment reversed this logic by keeping the spatial frequency of contour elements constant and shifting the background spatial frequency in upward or downward direction. 
Methods
Participants
Eighteen undergraduate students (15 female) served as observers in Experiment 1, and 18 undergraduate students (14 female) in Experiment 2. All participants had normal or corrected-to-normal vision. The observers had no prior psychophysical experience, were paid, and were not informed about the purpose of the experiment. All participants signed a written consent form according to the World Medical Association Helsinki Declaration and were informed that they could withdraw from the experiment at any time without penalty. The experiment complied with the relevant institutional and national regulations and legislation. 
Stimuli
Stimulus displays consisted of approximately n = 225 Gabor micropatterns defined by

G(x, y) = sin(2πf (x cos ω + y sin ω) + θ) · exp(−(x² + y²) / (2σ²))  (1)

In (1), let ω = φπ/180, with φ the rotation angle measured in degrees, f the carrier spatial frequency, and θ the spatial phase, which alternated randomly between sine and counterphase sine. Gabor micropatterns were spatially limited to a diameter of 1.0° visual angle by setting the standard deviation σ of the Gaussian envelope to 0.2° and clipping beyond a radius of 2.5 σ-units.
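For illustration, a minimal sketch of how a single Gabor micropattern following (1) could be rendered; the pixel resolution (pixels per degree) and grid size are hypothetical values chosen for the example, not parameters taken from the apparatus described below.

import numpy as np

def gabor_patch(freq_cpd, phi_deg, phase=0.0, sigma_deg=0.2, ppd=40):
    # ppd: assumed pixels per degree; the patch is clipped at 2.5 sigma as described in the text
    radius_deg = 2.5 * sigma_deg
    n = int(round(2 * radius_deg * ppd))
    x = np.linspace(-radius_deg, radius_deg, n)
    X, Y = np.meshgrid(x, x)
    omega = np.deg2rad(phi_deg)  # omega = phi * pi / 180
    carrier = np.sin(2 * np.pi * freq_cpd * (X * np.cos(omega) + Y * np.sin(omega)) + phase)
    envelope = np.exp(-(X**2 + Y**2) / (2.0 * sigma_deg**2))
    patch = carrier * envelope
    patch[X**2 + Y**2 > radius_deg**2] = 0.0  # clip beyond 2.5 sigma-units
    return patch

# Example: a 4.25-cpd element rotated by 30 degrees in counterphase
patch = gabor_patch(4.25, 30.0, phase=np.pi)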
Stimulus construction fell into four stages (Figure 1). First, 225 background positions were arranged as a hexagonal grid. Second, a contour was created in a procedure akin to the pathfinder algorithm (Field et al., 1993) and superimposed onto the grid. Background positions that overlapped contour positions were removed. Contours comprised 12 elements with an interelement angle of ±30° and assumed either a circular or an “S”-shaped form. The cardinal axis of S-shaped contours was limited to a range of horizontal orientations. The bounding box of a contour always fell within an 11° × 11° square around the center of the whole 17° × 17° background lattice. This restriction was introduced to prevent contours from extending too far into the peripheral visual field where human contour integration is known to cease (Hess & Dakin, 1997). Pairwise Euclidean distances between adjacent contour positions were sampled according to the probability distribution of interposition distances in the background. Contour and background positions thus exhibited near identical spatial distributions with respect to the distances between each element and its immediate neighbors, as determined by planar Delaunay triangulation (Lee & Schachter, 1980). The probability distribution of spatial distances among background elements was estimated from simulations of 25,000 background patterns prior to the experiment. Third, a perturbation method (Braun, 1999) was used to displace background element positions. Finally, Gabor micropatterns were placed at all positions. 
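To make stages 1 and 3 concrete, a minimal sketch of a hexagonal background grid with random position perturbation; the grid spacing and jitter magnitude are illustrative assumptions rather than the values used in the study, and the perturbation shown here is a simple Gaussian displacement rather than the method of Braun (1999).

import numpy as np

def hex_grid(n_cols=15, n_rows=15, spacing=1.0):
    # Stage 1: hexagonal lattice of element positions (spacing in degrees, assumed)
    xs, ys = [], []
    for r in range(n_rows):
        for c in range(n_cols):
            xs.append(c * spacing + (r % 2) * spacing / 2.0)  # offset every other row
            ys.append(r * spacing * np.sqrt(3) / 2.0)
    return np.column_stack([xs, ys])

def perturb(positions, jitter_sd=0.15, rng=None):
    # Stage 3: diffuse element positions by random displacement (jitter_sd assumed)
    rng = np.random.default_rng() if rng is None else rng
    return positions + rng.normal(0.0, jitter_sd, positions.shape)

grid = perturb(hex_grid())  # 15 x 15 = 225 jittered background positions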
Figure 1
 
Construction of stimulus displays. Depicted are the initial hexagonal grid with 225 element positions with a rectangle indicating the possible area of contour placement (1), the addition of a circular or S-shaped contour (2), the spatial diffusion of background element positions (3), and the placement of stimulus patches onto the element positions (4).
Gabor orientations were sampled uniformly from the interval φ = [0°, 360°]. Gabor spatial frequencies were defined in octave metric as f = 2^u, with u sampled uniformly from the interval [1.912, 2.262], spanning 0.35 octaves with a mean of 2.09 octaves. Expressed in units of cycles per degree (cpd), spatial frequencies fell in the interval [3.76, 4.79] cpd with a median of f̃ = 4.25 cpd.
In both experiments, we defined target stimuli with multiple levels of contour salience based on two different stimulus properties: orientation alignment and spatial frequency contrast. Target contours defined by orientation alignment were created by initially co-aligning contour elements with the global contour curvature and then tilting element orientations away from perfect collinearity, randomly in clockwise or counterclockwise direction. Increasing the “tilt angle” corrupts the fit of local element orientation with global path curvature and thus results in deteriorating contour detection performance. Target contours defined by spatial frequency contrast were created by introducing feature contrast between contour and background elements. 
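As a sketch of the orientation-defined manipulation just described, the following illustrates how perfectly co-aligned contour elements could be tilted away from collinearity; the tangent orientations and variable names are assumed for the example.

import numpy as np

def tilt_contour_orientations(tangent_deg, tilt_deg, rng=None):
    # Start from perfect collinearity (each element oriented along the local path tangent),
    # then rotate every element by +/- tilt_deg, with the sign chosen at random
    rng = np.random.default_rng() if rng is None else rng
    signs = rng.choice([-1.0, 1.0], size=len(tangent_deg))
    return (np.asarray(tangent_deg) + signs * tilt_deg) % 360.0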
Apparatus
Stimuli were generated on a ViSaGe graphics adapter (Cambridge Research Systems, Ltd., Rochester, UK) and displayed on a Samsung 959NF color monitor (Samsung Electronics Co., Ltd., Suwon, South Korea). The mean luminance of the screen was 50.1 cd/m2. Stimuli were displayed with a fixed Michelson contrast of 0.85. Gray values were taken from a gamma-corrected linear staircase consisting of 255 steps. Linearity was checked with a Cambridge Research Systems ColorCAL colorimeter. The refresh rate of the monitor was 80 Hz, the pixel resolution was set to 1348 × 1006 pixels. The room was darkened so that the ambient illumination approximately matched the illumination of the screen. Patterns were viewed binocularly at a distance of 70 cm. Participants used a chin rest for head stabilization and gave their responses with their dominant hand via an external response keyboard. 
Psychophysical task
A 2AFC detection and discrimination task was used. Participants saw two subsequent stimuli, one of which contained a target contour. With the first button press, participants indicated whether the first or the second stimulus contained a target; with the second button press, they indicated whether the target contour was a circle or an S-shape. Contour shape was randomly chosen with equal likelihood on each trial. Feedback about correctness was provided via brief tone signals for both judgements. Stimulus presentation terminated with spatial noise masking at a grain resolution of three pixels. Stimulus presentation time was selected according to the results of Braun (1999), who found saturated contour integration performance of human observers at inspection times over 350 ms. The temporal order of events was fixation (500 ms) − stimulus onset asynchrony (SOA, 400 ms) − first stimulus (350 ms) − mask (400 ms) − SOA (400 ms) − second stimulus (350 ms) − mask (400 ms) − blank frame until response. Each observer began the experiment with a training period with highly salient target stimuli to learn key assignment and shape categorization, followed by calibration measurements and finally the main experiment. 
Experimental design
In both experiments, four ascending levels of contour visibility were established under five different conditions of contour definition. Single cue targets were defined by orientation tilt angle (ϕ), upward spatial frequency contrast (f↑), or downward spatial frequency contrast (f↓) of contour elements. Double cue targets were the two combinations of orientation with each spatial frequency contrast direction, denoted as ϕ + f↑ and ϕ + f↓. Figure 2 gives an example of two of the four visibility levels, each under all five contour definition conditions. The source of spatial frequency feature contrast switched between the two experiments. In Experiment 1, spatial frequency increments or decrements were applied only to contour elements while all background elements remained at carrier spatial frequencies around f̃ = 4.25 cpd. In Experiment 2, this procedure was reversed. Here, the spatial frequency of background elements was shifted in upward or downward direction while all contour elements remained around f̃ = 4.25 cpd. The labels f↑ and f↓ thus describe the relative spatial frequency difference between contour and background elements, with the spatial frequency of contour elements serving as the reference point. In f↑ conditions, contour elements exhibited a higher spatial frequency than background elements, regardless of whether the spatial frequency of contour elements was shifted upward (left panel in Figure 2) or the spatial frequency of background elements was shifted downward (right panel in Figure 2). The same holds for f↓, which could be realized by decreasing the spatial frequency of contour elements or increasing that of background elements.
Figure 2
 
Stimuli used in the experiments. Depicted are examples for two of the four visibility levels used in the experiments. Only the central stimulus region comprising the contour is shown here. Single cue target contours were defined by orientation alignment (ϕ) or spatial frequency feature contrast in either upward (f↑) or downward (f↓) direction relative to the background. Double cue targets were generated by combining the orientation alignment cue with each of the feature contrast cues. Spatial frequency contrast was established by shifting the spatial frequency of (A) contour elements (Experiment 1) or (B) background elements (Experiment 2) in upward or downward direction.
Calibration of perceptual equivalence
Calibration measurements were conducted to establish a set of perceptually equivalent visibility levels for different target contours. Individual psychometric functions were first measured in a 2AFC contour detection task for the three single features (ϕ, f↑, and f↓) and both contour types (S-shape and circle). All calibration measurements were done at least twice by every participant on two different days to obtain reliable psychometric function estimates. Data sets from the two days were merged and fit with Weibull functions using the Levenberg–Marquardt algorithm (Marquardt, 1963). From those individual psychometric functions, sets of four feature values were estimated for each participant, single feature, and contour type. The four feature values were selected to correspond to proportions correct of pc = {0.62, 0.68, 0.74, 0.80}, or d′ = {0.432, 0.661, 0.910, 1.190}, respectively. These sets were then used in the main experiment to define single cue targets (ϕ, f↑, and f↓) as well as double cue targets (ϕ + f↑ and ϕ + f↓) at four visibility levels. Due to this two-step procedure, the single cue sensitivities reported in the Results section (e.g., in Figures 3 and 4) are not just interpolated estimates from calibration measurements but were computed from data obtained during the main experiment. Slight deviations between the set of intended visibility levels and observed performance are thus to be expected. With 32 replications in each experimental condition, participants completed at least 2000 trials over the course of the whole experiment. 
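A minimal sketch of this calibration step: fitting a Weibull psychometric function to 2AFC detection data and inverting it at the four target proportions correct. The particular Weibull parameterization and the use of scipy are assumptions made for illustration; the study only states that Weibull functions were fit with the Levenberg–Marquardt algorithm.

import numpy as np
from scipy.optimize import curve_fit

GUESS = 0.5  # 2AFC guessing rate

def weibull(x, thresh, slope):
    # Proportion correct as a function of feature contrast x (assumed parameterization)
    return GUESS + (1.0 - GUESS) * (1.0 - np.exp(-(x / thresh) ** slope))

def fit_and_invert(x, pc, targets=(0.62, 0.68, 0.74, 0.80)):
    # Without bounds, curve_fit defaults to the Levenberg-Marquardt algorithm
    (thresh, slope), _ = curve_fit(weibull, x, pc, p0=[np.median(x), 2.0])
    targets = np.asarray(targets)
    # Invert the fitted function at the four target performance levels
    return thresh * (-np.log((1.0 - targets) / (1.0 - GUESS))) ** (1.0 / slope)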
Figure 3
 
Summary of main effects in Experiment 1. The figure depicts mean d′ and proportion correct for feature contrast detection (black circles) and shape discrimination (gray squares) for contours defined by orientation (ϕ), upward or downward spatial frequency contrast (f↑ and f↓), and the double cues (ϕ + f↑ and ϕ + f↓). Data are shown for the four visibility levels. Error bars denote 95% confidence limits of the mean, based on the standard error of measurement of each cell. The summation gain, as expressed by q-values (6), is calculated only for the detection task. Note that single cue performance (ϕ, f↑, and f↓) is computed from data obtained in the main experiment. It may thus exhibit slight deviations from the visibility levels estimated during calibration (see Methods).
Figure 4
 
Summary of main effects in Experiment 2. Conventions as in Figure 3.
Performance measures
Data transformation
In order to enable data analysis within the framework of factorial designs, it is necessary to have an unbounded variable with at least an interval scale of measurement. Proportion correct is not appropriate since it is a bounded measure whose distribution becomes seriously skewed as the mean gets close to the upper or lower end of the scale. The sensitivity measure d′ avoids this disadvantage and is uniquely related to proportion correct in a 2AFC task (see Macmillan & Creelman, 2005, p. 172). d′ is obtained from proportion correct by

d′ = √2 · z(pc)  (2)

where z(·) denotes the inverse of the standard normal cumulative distribution function.
Proportions correct for perfect performance were replaced by 1 − (2n)⁻¹, where n is the number of replications. This correction was applied to less than 3% of all observations in both tasks. 
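A minimal sketch of this transformation, assuming scipy for the inverse normal cumulative distribution function; the function and variable names are illustrative.

import numpy as np
from scipy.stats import norm

def dprime_2afc(n_correct, n_trials):
    pc = n_correct / n_trials
    if pc >= 1.0:  # correction for perfect scores, pc = 1 - (2n)^-1
        pc = 1.0 - 1.0 / (2 * n_trials)
    return np.sqrt(2.0) * norm.ppf(pc)  # d' = sqrt(2) * z(pc) for 2AFC

# Example: 30 of 32 correct responses
print(dprime_2afc(30, 32))  # roughly 2.17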
Measure of cue summation
The detection and discrimination of compound stimuli is treated by signal detection theory. It is assumed that the observer maps each stimulus component onto a random variable, and all random variables together span a multivariate space of sensory states (Green & Swets, 1988). 
Perceptual equivalence among cues, as established during calibration, allows for a base sensitivity to be defined as the average single cue sensitivity

d̄′ = (d′ϕ + d′f) / 2  (3)

Here, d′ϕ and d′f are the d′ values obtained for the single features, where f is either f↑ or f↓.

For two stimulus components which are independent cues in the sense of dimensional orthogonality (Tanner, 1956), sensitivity resulting from the combination of both cues is predicted by

d′⊥ = √(d′ϕ² + d′f²)  (4)

which has recently been termed “information summation” (Machilsen & Wagemans, 2011). Note that (4) can be viewed as a special case of the more general Minkowski summation rule, d′ = (C1^p + C2^p)^(1/p), which allows it to model a wide variety of sensitivity predictions resulting from the combination of two salience components, C1 and C2. Predictions from the Minkowski rule include information summation (p = 2), oversummative effects (1 < p < 2), and linear summation (p = 1) of cues (for an overview, see To, Baddeley, Troscianko, & Tolhurst, 2011). With perceptually equivalent cues, (4) reduces to

d′⊥ = √2 · d̄′  (5)

where d̄′ is the base sensitivity defined in (3). Also from this base sensitivity, a measure of summation gain can be calculated as the factor by which the observed double cue performance exceeds the average single cue performance,

q = d′ϕ+f / d̄′  (6)

Substituting the observed double cue performance, d′ϕ+f, in (6) with the prediction from (5) yields a predicted summation gain of q = √2, or 41.4%, for the visibility of a stimulus defined by two equally visible independent feature components rather than one.

Taken together, (3) and (5) enable a statistical assessment of whether the observed sensitivity for double cues, d′ϕ+f, is larger than single cue sensitivity d̄′ at all, and larger than expected from independent sensory coding of both features, d′⊥. In the latter case, the difference between double cue performance and base sensitivity,

Δd′ = d′ϕ+f − d̄′  (7)

has an expected value of d̄′(√2 − 1), and its standard error derives from the factorial decomposition of the experimental design employed for statistical testing (see Meinhardt & Persike, 2003, for a thorough formulaic derivation). Note that for all constellations of single cue performances, pϕ and pf, this benchmark is more rigorous than what can be derived from probability summation of independent feature components, calculated from proportion correct rates as p⊥ = 1 − [(1 − pϕ)(1 − pf)].
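A minimal sketch putting Equations (3) through (7) and the probability summation benchmark side by side; the numerical inputs are illustrative and do not come from the experiments.

import numpy as np

def summation_measures(d_phi, d_f, d_double):
    d_base = (d_phi + d_f) / 2.0          # (3) base sensitivity
    d_orth = np.sqrt(d_phi**2 + d_f**2)   # (4) information summation prediction
    q = d_double / d_base                 # (6) observed summation gain
    delta = d_double - d_base             # (7) sensitivity difference
    return d_base, d_orth, q, delta

def minkowski(c1, c2, p):
    # General Minkowski rule; p = 2 gives information summation, p = 1 linear summation
    return (c1**p + c2**p) ** (1.0 / p)

def probability_summation(p_phi, p_f):
    # Benchmark from independent detection probabilities (proportion correct)
    return 1.0 - (1.0 - p_phi) * (1.0 - p_f)

# Illustrative single cue sensitivities near the calibrated level d' = 0.91
d_base, d_orth, q, delta = summation_measures(0.91, 0.91, 2.0)
print(q, np.sqrt(2.0))  # observed gain vs. the prediction for independent cues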
Results
Experiment 1
Experiment 1 investigated the double cue benefit for combinations of local orientation (ϕ) with either upward (f↑) or downward (f↓) spatial frequency contrast of contour elements. In f↑, contour elements had a higher spatial frequency relative to the background, and in f↓ they had a lower spatial frequency. The spatial frequency of background elements remained constant at f̃ = 4.25 cpd, give or take random jitter.
Figure 3 summarizes the results from the feature contrast detection task (black symbols) and the shape discrimination task (gray symbols). Separate repeated-measures ANOVAs were conducted for detection and discrimination performance. Degrees of freedom were Huynh-Feldt corrected. Feature condition (ϕ, f↑, f↓, ϕ + f↑, and ϕ + f↓) and visibility level (1 to 4) served as within-subjects factors. Pairwise comparisons were calculated as a priori contrasts. 
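A minimal sketch of how such a repeated-measures ANOVA could be run, assuming the data are in a long-format pandas DataFrame with one d′ value per observer, feature condition, and visibility level; the column names and file name are hypothetical, and statsmodels' AnovaRM does not apply the Huynh-Feldt correction, which would have to be added separately.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Assumed columns: observer, feature, visibility, dprime
# df = pd.read_csv("experiment1_detection.csv")

def rm_anova(df):
    # Two within-subjects factors: feature condition and visibility level
    model = AnovaRM(data=df, depvar="dprime", subject="observer",
                    within=["feature", "visibility"])
    return model.fit()

# print(rm_anova(df))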
There is a main effect for visibility level in detection, F(3, 51) = 158.1, p < 0.001, and discrimination, F(3, 51) = 126.9, p < 0.001. As expected, performance rises with feature level and is higher for detection than for discrimination. For the detection task, the results indicate that the calibration procedure successfully established perceptual equivalence among all three single cues. Their mean d′ values show little variation within one visibility level. Pairwise contrasts among ϕ, f↑, and f↓ are not significant (p > 0.05) for all but one test. We find a main effect of feature type in both detection, F(4, 68) = 52.6, p < 0.001, and discrimination, F(4, 68) = 76.3, p < 0.001. Sensitivity is consistently higher with double cues than with single cues in both the detection and discrimination task. In all conditions and for both tasks, the summation gain is larger than q = 1.72, or 72% (see Table 1), and thus exceeds the prediction derived from information summation. Finally, we find an interaction effect in detection, F(12, 204) = 3.8, p < 0.001, and discrimination, F(12, 204) = 10.1, p < 0.001, presumably resulting from ceiling effects as single cue visibility increases. Observers reach average performance levels above 30 out of 32 correct responses, which is about as high as can be expected in psychophysical tasks. 
Table 1
 
Sensitivity advantage of double-cue targets compared to the base sensitivity level. The table shows the base sensitivity level, d̄′, mean sensitivity for double-cue targets, d′ϕ+f, the mean sensitivity difference, Δd′, and the ratio of double-cue and single-cue performance, q. Data are shown for both tasks and figure types at the four feature contrast levels.
Closer inspection of summation gains (Table 1) highlights two marked asymmetries. First, the magnitude of the cue summation benefit depends on spatial frequency direction. Double cue performance is 2.29 to 3.51 times larger than the average single cue performance for ϕ + f↓ but only 1.83 to 2.19 times larger for ϕ + f↑. This is backed by pairwise contrasts between ϕ + f↓ and ϕ + f↑, which are highly significant at each visibility level in both detection and discrimination (all p < 0.001). Second, summation gains in the discrimination task are larger than in the detection task in most cases, indicating that cue combination facilitates shape discrimination more than contour detection. 
Experiment 2
Experiment 2 reversed the source of spatial frequency contrast compared to Experiment 1. Here, the spatial frequency of contour elements remained constant at f̃ = 4.25 cpd while the mean spatial frequency of background elements was shifted in upward or downward direction. To remain consistent in notation, conditions with higher spatial frequencies along the contour (i.e., with reduced background frequency) are again labeled f↑, and conditions with lower spatial frequencies of contour elements (i.e., with elevated background spatial frequency) are termed f↓.
Figure 4 summarizes the results from the feature contrast detection task (black symbols) and the shape discrimination task (gray symbols). The analysis regime was identical to Experiment 1 and produced a very similar pattern of results. 
We find a main effect for visibility level in detection, F(3, 51) = 231.1, p < 0.001, and discrimination, F(3, 51) = 108.0, p < 0.001. Performance rises with feature level and is higher for detection than for discrimination. For the detection task, the results indicate that the calibration procedure successfully established perceptual equivalence among all three single cues. Their mean d′ values show little variation within one visibility level. All pairwise contrasts among ϕ, f↑, and f↓ are not significant (p > 0.05). We also find a main effect of feature type in both detection, F(4, 68) = 48.8, p < 0.001, and discrimination, F(4, 68) = 84.2, p < 0.001. Sensitivity is consistently higher with double cues than with single cues in both the detection and discrimination task. In all conditions and for both tasks, the summation gain is larger than q = 1.73, or 73% (see Table 2), again exceeding the prediction derived from information summation. Finally, we observe an interaction effect in detection, F(12, 204) = 4.15, p < 0.001, and discrimination, F(12, 204) = 9.15, p < 0.001, again probably resulting from ceiling effects as single cue visibility increases. 
Table 2
 
Sensitivity advantage of double-cue targets compared to the base sensitivity level. Conventions are as in Table 1.
The pattern of summation gains (Table 2) exhibits the same asymmetries as in Experiment 1. First, the magnitude of the cue summation gain depends on spatial frequency direction. Double cue performance benefits more in ϕ + f↓ than in ϕ + f↑. This is backed by significant pairwise contrasts between ϕ + f↓ and ϕ + f↑ (p < 0.05 in all cases but one). Second, summation gains in the discrimination task are larger than in the detection task in all cases. Contour discrimination benefits more from cue combination than detection. 
Joint analysis of the double cue benefit
Results so far suggest that spatial frequency cues of identical perceptual salience interact differently with contour integration. The magnitude of the cue summation gain depends on the direction of spatial frequency contrast. All single cue saliences being equal, a shift of the carrier frequency of already co-aligned contour elements to lower spatial frequencies increases contour visibility considerably more than a shift to higher spatial frequencies. Note again that our use of the term “lower spatial frequencies” does not refer to low spatial frequency bands in absolute terms but to the relative difference between contour elements and background elements. 
This differential cue summation benefit can be quantified with respect to the prediction of dimensional orthogonality according to (7) and analyzed by ANOVA procedures. Feature combination (ϕ + f↑ and ϕ + f↓) and visibility level (1 through 4) served as within-subjects factors in two separate repeated measures ANOVAs for detection and discrimination, with experiment (1 vs. 2) as a between-subjects factor. Effects are depicted in Figure 5. A main effect was observed for feature combination in both the detection, F(1, 34) = 39.3, p < 0.001, and the discrimination, F(1, 34) = 77.4, p < 0.001, task, reflecting the different magnitudes of the double cue benefit between ϕ + f↑ and ϕ + f↓. Both feature combinations exceed the performance predicted from dimensional orthogonality in all cases (gray area in Figure 5). Cue summation gains increase with single cue sensitivity until limited by ceiling effects. The combination of orientation co-alignment with lower spatial frequencies (ϕ + f↓), however, yields markedly larger summation gains than with higher spatial frequencies (ϕ + f↑). Post-hoc comparisons for ϕ + f↓ against ϕ + f↑ on each visibility level are significant in the detection task of both experiments (p < 0.05 in five out of eight comparisons) as well as in the discrimination task (all p < 0.01). No main effect was found for experiment, indicating similar overall levels of cue summation gains in both experiments. Finally, there is a main effect of visibility level in both detection, F(3, 102) = 3.6, p < 0.05, and discrimination, F(3, 102) = 18.0, p < 0.001. Figure 5 strongly suggests that summation gains rise with visibility level and thus feature contrast level, particularly in the discrimination task. 
Figure 5
 
Sensitivity difference Δd′ according to (7). The gray area denotes the prediction derived from the integration of independent cues, d′⊥.
Spatial frequency feature contrasts
This positive association between feature contrast level and summation gain hints at a rather trivial explanation for the discrepancy between conditions with upward and downward spatial frequency contrast. If observers required larger absolute feature contrasts for f↓ conditions than for f↑ conditions in order to establish identical visibility levels among the single cues, the different cue summation benefits might be a mere consequence of physical stimulus intensity. 
To test this, we analyzed spatial frequency contrasts from both of our main experiments across all observers in octave metric. Figure 6 summarizes the absolute contrast values for each single cue visibility level, averaged across observers. On each level, conditions with downward feature contrast (f↓) consistently require lower feature contrast than conditions with upward feature contrast (f↑). Given that ϕ + f↓ conditions exhibited larger cue summation benefits than ϕ + f↑, this means that less feature contrast is required to yield more cue summation gain in ϕ + f↓ conditions. 
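For concreteness, a minimal sketch of the octave metric used here; the example frequencies are illustrative.

import numpy as np

def octave_contrast(f_contour_cpd, f_background_cpd):
    # Signed spatial frequency contrast in octaves (negative: contour lower than background)
    return np.log2(f_contour_cpd / f_background_cpd)

print(octave_contrast(3.45, 4.25))  # about -0.30 octaves, a downward (f-down) contrast
print(octave_contrast(5.25, 4.25))  # about +0.30 octaves, an upward (f-up) contrast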
Figure 6
 
Spatial frequency contrasts from both experiments.
Discussion
We combined orientation collinearity with spatial frequency contrast to define target contours in a contour detection and discrimination task. Spatial frequency contrasts were realized either as increments or decrements relative to the background spatial frequency. Our body of results can be summarized by three points: (a) the cue summation gain is larger for contours with a lower spatial frequency than the background than for contours with a higher spatial frequency, (b) performance with double cues was greater than both single cue performance and the prediction derived from independent processing of salience components, for both the discrimination and the detection task, and (c) the cue summation gain was larger for discrimination than for detection. 
Advantage of lower spatial frequencies
On all visibility levels and in both experiments, observers needed less feature contrast for target contours with lower spatial frequencies to reach the same detection performance as for targets with higher spatial frequencies relative to the background. In f↓ conditions, observers required a 0.2–0.3 octave shift on average to reach a 75% correct detection level. In f↑ conditions, about 0.3–0.35 octaves were necessary to reach this performance level. Note that these figures comply with the range of thresholds for spatial frequency discrimination (Wilson & Gelb, 1984; Wilson, McFarlane, & Phillips, 1983) and also with the known detection asymmetry in favor of spatial frequency decrements (Regan & Beverley, 1983). Moreover, contour salience enhancement, expressed in terms of cue summation gain, is much stronger when orientation collinearity is combined with low spatial frequency deviants than with high spatial frequency deviants. Taken together, less feature contrast is required to generate a higher cue summation gain in ϕ + f↓ conditions than in ϕ + f↑ conditions. 
Before considering more complex explanations, we shall discuss low-level stimulus characteristics as well as basic perceptual properties that might account for this anisotropy. It has been shown that contrast sensitivity hinges on eccentricity (Robson & Graham, 1981) and also on spatial frequency, for which it is greatest at low spatial frequencies, particularly at brief exposure durations (Robson, 1966). This puts perceived contrast forward as a possible cause for the anisotropies found in our data. We would argue against any significant effect of contrast sensitivity on several grounds. Both spatial frequency cues (f↑ and f↓) were calibrated and verified to be perceptually equivalent on each visibility level, which should at least partially counterbalance different contrast sensitivities for high versus low spatial frequency stimuli. Further, our reference spatial frequency of 4.25 cpd was chosen to lie in the middle of the known contrast sensitivity plateau between 3 and 5 cpd for the age group of our observers (Owsley, Sekuler, & Siemsen, 1983) and across a vast range of eccentricities from 0° up to 7.5° (Rovamo, Virsu, & Nasanen, 1978). In addition, many of the experiments that reveal considerable modulation of contrast sensitivity based on spatiotemporal stimulus properties, including the ones mentioned before (i.e., Owsley et al., 1983; Robson, 1966; Robson & Graham, 1981; Rovamo et al., 1978), were conducted as threshold contrast measurements. Our stimulus contrasts, however, are substantially above threshold (see Methods). Suprathreshold contrast matching experiments show that human contrast sensitivity is almost invariant over a wide range of spatial frequencies (Peli, Arend, & Labianca, 1996), including the spatial frequency ranges realized in our experiments. As for the brief exposure duration, matching contrasts for suprathreshold stimuli are level over a wide range of timings and do not vary substantially across the spatial frequency range used here (Georgeson, 1987). Also, Gabor element root mean square (RMS) contrast is practically constant across the range of spatial frequencies at our chosen Gabor element size. Finally, across all observers and visibility levels, spatial frequencies along target contours ranged from 4.43 cpd to 6.84 cpd for upward feature contrasts and from 2.87 cpd to 4.13 cpd in downward direction. With Gabor carrier spatial frequencies of contour and background elements thus differing by no more than 0.68 octaves in all experimental conditions, bandwidths were confined within reasonable intervals, and contour integration performance is known to remain level between 1.6 cpd and at least 6.4 cpd (Dakin & Hess, 1998). In sum, we believe that the asymmetry in cue summation gain between low and high spatial frequency contours is not easily accounted for by confounds in physical stimulus properties (i.e., bandwidth or RMS contrast) or early properties of the visual system such as contrast sensitivity. 
Low spatial frequencies have also been ascribed advantages in the attentional prerequisites of processing. They are much more effective in capturing spatial attention for the regions (Shulman & Wilson, 1987) and features (Maunsell & Treue, 2006) of interest. Contours defined by combinations of orientation collinearity with low spatial frequency deviants may therefore enjoy advantageous certainty conditions and be more easily detected in a random display. With increased spatial and spatial frequency certainty, the local association field could operate more effectively to establish a global contour percept. Related evidence comes from research on the global precedence effect in visual processing (Navon, 1977), showing that the representation of a visual scene is not acquired all at once but on different temporal and spatial scales, ranging from a rapid gist of the scene to the extraction of fine details (Rasche & Koch, 2002). Although the visual system is highly adaptive in its use of different spatial scales for scene representation (Kimchi, 1992), the fast part of global scene processing is often believed to rely more on low spatial frequencies, as was shown by neuroanatomical (Sugase, Yamane, Ueno, & Kawano, 1999; Tamura & Tanaka, 2001), psychophysical (Loftus & Harley, 2004; Schyns & Oliva, 1994), and computational studies (Rodrigues & Buf, 2006). 
The special role of low spatial frequencies for scene processing, however, cannot account for our full body of results. In absolute terms, we did not limit target contours to low spatial frequency bands. Whereas in our first experiment the lowest spatial frequencies among contour elements were indeed only around 2.5 cpd, the second experiment saw contour elements at a constant range of spatial frequencies with a rather midscale median of 4.25 cpd. We found cue combination gains to be large throughout both experiments, albeit largest when contour elements had not a low, but a lower spatial frequency than background elements, regardless of absolute value ranges. Particularly notable in this regard is Experiment 2. There, the feature values of contour elements were constant over the course of the whole experiment while only the background changed. We obtained markedly different cue summation gains depending on whether the spatial frequency of background elements was shifted upward or downward. The fact that identical contour properties in different surrounds yield different summation gains is not easily reconciled with the notion of a purely local contour integration mechanism that acts on element features such as orientation and spatial frequency alone. The observed asymmetry in cue summation gains indicates that contours are most effectively extracted when they constitute the lower spatial frequency parts of a scene. The finding falls in line with newer theories on the interplay between spatial scales during scene gist understanding (Oliva & Torralba, 2006). The focus here lies not on specific spatial frequency bands but rather on configurations, or relations, of spatial scales to guide local feature analysis and facilitate object recognition in cluttered surrounds. This may place contour integration at later neural stages, a view corroborated by studies showing that neural correlates of contour integration emerge in the LOC rather than in early visual sites like V1 (Gilad et al., 2013; Shpaner et al., 2013). 
Magnitude of cue summation gains
Our results show that the combination of orientation and spatial frequency is highly effective in enhancing contour salience. The cue summation gain tends to increase with single cue sensitivity until limited by ceiling effects. This is consistent with previous results suggesting a nonlinear dependency between single cue salience and the resulting cue summation gain (Meinhardt et al., 2004; Persike & Meinhardt, 2006). When single cue visibilities are too close to absolute threshold or too high, cue summation gains decrease. More importantly, cue summation gains are oversummative on all visibility levels and for all realized combinations of single cues, meaning that they are larger than what can be expected from independent processing of collinearity and spatial frequency cues. Especially for contours with a lower spatial frequency than the background, summation gains are not just oversummative but “super-linear” in almost all cases, meaning that sensitivity for combined cues is larger than the unweighted linear sum of both individual sensitivities. Super-linear cue combination indicates a nonlinear integration rule and the involvement of dedicated mechanisms for the combination of cues. 
It has been argued that super-linear summation gains could be caused by low-level mechanisms for bottom-up salience, possibly situated as early as V1 (Zhaoping & Zhe, 2012). V1 has lateral interconnections that span preferentially between neurons that have similar preferred orientations and are also selectively tuned to spatial frequency (for a review, see Ng, Bharath, & Zhaoping, 2007). Collinear facilitation between neighboring V1 neurons with similar orientation and spatial frequency tuning has been shown to elicit larger responses to co-aligned contour segments than to the background elements (Kapadia, Ito, Gilbert, & Westheimer, 1995; Nelson & Frost, 1985) and has been proposed to be the substrate for contour integration (Li & Gilbert, 2002). Moreover, a subset of intracortical interactions in V1 is such that neighboring V1 neurons tuned to similar spatial frequencies inhibit each other (Li & Li, 1994). If the contour elements have a different spatial frequency than the background, mutual inhibition between the few neurons encoding a contour should therefore be less than among the many background elements. Consequently, facilitation by collinearity and reduced inhibition due to spatial frequency contrast might lead to elevated V1 responses and even create a bottom-up salience map (Li, 2002) where responses to the combination of two cues can be super-linear when there are conjunctive cells tuned to both features (Koene & Zhaoping, 2007; Zhaoping & Zhe, 2012). 
Research on the spatial frequency tuning of contour integration offers a second explanation for the observed super-linear cue combination effects at the other end of the visual hierarchy. Contour integration has been shown to work best when the collinear Gabor elements along the contour share a common spatial frequency (Dakin & Hess, 1998). With increasing spatial frequency disparity among contour elements, contour detection performance decreases. If our spatial frequency manipulations were sufficient to shift contour elements far away from background elements in terms of frequency bands, contour integration could proceed with much less interference from surrounding elements. This could cause effects particularly at the decision stage of visual processing (Ashby & Townsend, 1986). Due to the random sampling of orientations among background elements, multiple adjacent elements in a contour integration stimulus may assume collinear orientations just by accident and thus form possible contour candidates, so-called false positives (Tversky, Geisler, & Perry, 2004). Response errors may therefore stem either from the failure to detect any strings of adjacent collinear elements in a given stimulus or from the detection of accidental contours in the background. The second feature cue might, in such cases, help to disambiguate between the actual target contour and accidental background contours. Increased performance with double cues would then result from a reduced likelihood of accidental contour formation among background elements. 
Though decisional effects as well as a V1 contribution may play a role, two points refute the hypothesis that our super-linear cue combination gains can be attributed to either of these stages alone. First, although contour integration is adversely affected by spatial frequency disparities, the latter must reach rather high levels in order to make a significant impact (Dakin & Hess, 1999). Contour integration remains highly operational even at spatial frequency differences of more than two octaves among contour elements (Persike et al., 2009). The spatial frequency contrasts in our experiments were much lower and probably insufficient to effectively separate the spatial frequency bands of contour and background elements, especially when taking into account the random jitter of 0.35 octaves of both contour and background elements. Second, and more importantly, neither the bottom-up salience account nor a disambiguation at the decision stage explains the marked asymmetry in cue summation gains between the detection and discrimination task. 
Cue summation difference for detection and identification
The asymmetry between contour detection and discrimination performance, with respect to both single-cue performance and cue summation gains, provides support for the involvement of higher visual areas. Disparities between target detection and conscious perception of object shape have been found in numerous studies: detection performance for objects can significantly exceed categorization (Bowers & Jones, 2008; Sagi & Julesz, 1984), the time courses of object detection and categorization can be manipulated selectively (Mack, Gauthier, Sadr, & Palmeri, 2008), visual adaptation affects detection and identification performance differently (Hillis & Brainard, 2007), and activation in V1 is tightly coupled with detection performance whereas identification correlates more strongly with activation in later areas such as the LOC and the collateral sulcus (Straube & Fahle, 2011). Our results not only show a difference between detection and identification performance as such, but also a marked anisotropy in the magnitude of cue summation gains between the two tasks. In both experiments and all conditions, contour discrimination benefited more than detection from cue combination. Similar results have been reported in previous research on the combination of multiple cues in figure-ground segregation (Meinhardt et al., 2006; Persike & Meinhardt, 2008) and correspond to neuroimaging studies showing that higher ventral regions responsible for shape representation respond more strongly to the combination of features than do early retinotopic areas (Altmann et al., 2003). Even if super-linear salience gains are generated in V1 (see above) to aid detection, we propose that a second mechanism benefits even more strongly from the conjunction of cues. This mechanism accomplishes contour completion and shape discrimination on a larger spatial scale. Shape representation, independent of the type of feature cue and invariant to size and location, is found in higher visual areas such as the parietal cortex, the inferotemporal cortex (IT), and the LOC (Kourtzi & Huberle, 2005; Kourtzi & Kanwisher, 2000; Lerner, Hendler, Ben-Bashat, Harel, & Malach, 2001). The idea of a higher-level neural implementation of cue combination is backed by the first study to report an electrophysiological correlate of feature synergy, which emerges no earlier than 130 ms after stimulus onset in IT and propagates from there to earlier visual areas (Kida, Tanaka, Takeshima, & Kakigi, 2011). Moreover, the LOC responds to perceived global shape and is particularly sensitive to the contour lines of objects. LOC responses are not modulated by the familiarity of objects, suggesting a stimulus-driven analysis without reference to stored knowledge about specific object form (Kourtzi & Kanwisher, 2000). BOLD responses in V1 and V2 are strongly modulated by a change of local element orientation but hardly by a change in global shape; conversely, the LOC responds most strongly to a change in global form and only moderately to local feature change (Kourtzi & Huberle, 2005). This ties in with recent evidence that contour-related effects found in early visual areas like V1 may not be the source of contour integration but a mere epiphenomenon (Chen et al., 2014): the initial peak of contour-related activation was found in V4, followed by congruent activation in V1, probably evoked by feedback connections from V4. 
Taken together, our findings agree with a distributed, predominantly higher-level model of cue combination in contour integration (Gilad et al., 2013). A realistic model of human contour integration should exploit multiple sources of contour salience: it should comprise robust mechanisms for feature contrast detection as well as collinearity-based element grouping, show a preference for lower spatial frequencies, and involve active selection and guidance mechanisms suited to optimize the combination of features in the spatial binding process. 
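As a purely schematic sketch of how such components might be combined (all functional forms and weights below are our own illustrative assumptions, not an implemented model from the literature), per-element contour salience could be expressed as a weighted sum of a collinearity term and a spatial frequency contrast term, with additional weight when the element's frequency lies below that of the background:

    import numpy as np

    # Schematic toy: per-element contour salience from two cues.
    # orientation_dev: deviation (deg) of each element from its neighbors' contour axis
    # sf_element, sf_background: spatial frequencies (e.g., in cyc/deg)
    # All weights and the Gaussian tolerance of 20 deg are arbitrary assumptions.
    def salience(orientation_dev, sf_element, sf_background,
                 w_group=1.0, w_contrast=1.0, low_sf_bonus=0.5):
        grouping = np.exp(-(orientation_dev / 20.0) ** 2)       # collinearity cue
        contrast = np.abs(np.log2(sf_element / sf_background))  # SF contrast in octaves
        bonus = low_sf_bonus * (sf_element < sf_background)     # favor lower-SF contours
        return w_group * grouping + (w_contrast + bonus) * contrast

    # Example: a well-aligned, lower-frequency element vs. a misaligned, same-frequency one
    print(salience(np.array([5.0, 40.0]), np.array([2.0, 3.0]), 3.0))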
Acknowledgments
Commercial relationships: none. 
Corresponding author: Malte Persike. 
Email: persike@uni-mainz.de. 
Address: Psychological Institute, Johannes Gutenberg University, Mainz, Germany. 
References
Altmann C. F., Bülthoff H. H., Kourtzi Z. (2003). Perceptual organization of local elements into global shapes in the human visual cortex. Current Biology, 13 (4), 342–349, doi:10.1016/s0960-9822(03)00052-6.
Angelucci A., Levitt J. B., Walton E. J., Hupe J. M., Bullier J., Lund J. S. (2002). Circuits for local and global signal integration in primary visual cortex. Journal of Neuroscience, 22 (19), 8633–8646.
Ashby F. G., Townsend J. T. (1986). Varieties of perceptual independence. Psychological Review, 93 (2), 154–179.
Bauer R., Heinze S. (2002). Contour integration in striate cortex. Classic cell responses or cooperative selection? Experimental Brain Research, 147 (2), 145–152, doi:10.1007/s00221-002-1178-6.
Beaudot W. H., Mullen K. T. (2003). How long range is contour integration in human color vision? Visual Neuroscience, 20 (1), 51–64.
Bex P. J., Simmers A. J., Dakin S. C. (2001). Snakes and ladders: the role of temporal modulation in visual contour integration. Vision Research, 41 (27), 3775–3782, doi:10.1016/S0042-6989(01)00222-X.
Bosking W. H., Zhang Y., Schofield B., Fitzpatrick D. (1997). Orientation selectivity and the arrangement of horizontal connections in tree shrew striate cortex. Journal of Neuroscience, 17 (6), 2112–2127.
Bowers J. S., Jones K. W. (2008). Detecting objects is easier than categorizing them. Quarterly Journal of Experimental Psychology (Hove), 61 (4), 552–557, doi:10.1080/17470210701798290.
Braun J. (1999). On the detection of salient contours. Spatial Vision, 12 (2), 211–225.
Carrasco M., McLean T. L., Katz S. M., Frieder K. S. (1998). Feature asymmetries in visual search: Effects of display duration, target eccentricity, orientation and spatial frequency. Vision Research, 38 (3), 347–374.
Chen C. M., Lakatos P., Shah A. S., Mehta A. D., Givre S. J., Javitt D. C., et al. (2007). Functional anatomy and interaction of fast and slow visual pathways in macaque monkeys. Cerebral Cortex, 17 (7), 1561–1569.
Chen M. G., Yan Y., Gong X. J., Gilbert C. D., Liang H. L., Li W. (2014). Incremental integration of global contours through interplay between visual cortical areas. Neuron, 82 (3), 682–694, doi:10.1016/j.neuron.2014.03.023.
Christman S., Kitterle F. L., Hellige J. (1991). Hemispheric asymmetry in the processing of absolute versus relative spatial frequency. Brain and Cognition, 16 (1), 62–73.
Dakin S. C., Hess R. F. (1998). Spatial-frequency tuning of visual contour integration. Journal of the Optical Society of America A, Optics, Image Science, and Vision, 15 (6), 1486–1499.
Dakin S. C., Hess R. F. (1999). Contour integration and scale combination processes in visual edge detection. Spatial Vision, 12 (3), 309–327, doi:10.1163/156856899x00184.
De Valois R. L., Albrecht D. G., Thorell L. G. (1982). Spatial frequency selectivity of cells in macaque visual cortex. Vision Research, 22 (5), 545–559.
Ernst U. A., Mandon S., Schinkel-Bielefeld N., Neitzel S. D., Kreiter A. K., Pawelzik K. R. (2012). Optimality of human contour integration. PLoS Computational Biology, 8 (5), e1002520, doi:10.1371/journal.pcbi.1002520.
Field D. J., Hayes A., Hess R. F. (1993). Contour integration by the human visual system: Evidence for a local “association field.” Vision Research, 33 (2), 173–193.
Field D. J., Hayes A., Hess R. F. (1997). The role of phase and contrast polarity in contour integration. Investigative Ophthalmology & Visual Science, 38 (4), 4643.
Georgeson M. A. (1987). Temporal properties of spatial contrast vision. Vision Research, 27 (5), 765–780.
Gilad A., Meirovithz E., Slovin H. (2013). Population responses to contour integration: Early encoding of discrete elements and late perceptual grouping. Neuron, 78 (2), 389–402, doi:10.1016/j.neuron.2013.02.013.
Green D. M., Swets J. A. (1988). Signal detection theory and psychophysics. Los Altos, CA: Wiley.
Han S., Yund E. W., Woods D. L. (2003). An ERP study of the global precedence effect: The role of spatial frequency. Clinical Neurophysiology, 114 (10), 1850–1865.
Hansen B. C., Hess R. F. (2006). The role of spatial phase in texture segmentation and contour integration. Journal of Vision, 6 (5): 5, 594–615, http://www.journalofvision.org/content/6/5/5, doi:10.1167/6.5.5. [PubMed] [Article]
Hansen T., Neumann H. (2008). A recurrent model of contour integration in primary visual cortex. Journal of Vision, 8 (8): 8, 1–25, http://www.journalofvision.org/content/8/8/8, doi:10.1167/8.8.8. [PubMed] [Article]
Hess R. F., Beaudot W. H., Mullen K. T. (2001). Dynamics of contour integration. Vision Research, 41 (8), 1023–1037.
Hess R. F., Dakin S. C. (1997). Absence of contour linking in peripheral vision. Nature, 390 (6660), 602–604.
Hess R. F., Field D. (1999). Integration of contours: New insights. Trends in Cognitive Sciences, 3 (12), 480–486.
Hess R. F., Hayes A., Field D. J. (2003). Contour integration and cortical processing. Journal of Physiology-Paris, 97 (2–3), 105–119, doi:10.1016/j.jphysparis.2003.09.013.
Hess R. F., Hayes A., Kingdom F. A. (1997). Integrating contours within and through depth. Vision Research, 37 (6), 691–696.
Hillis J. M., Brainard D. H. (2007). Distinct mechanisms mediate visual detection and identification. Current Biology, 17 (19), 1714–1719, doi:10.1016/j.cub.2007.09.012.
Huang P. C., Hess R. F., Dakin S. C. (2006). Flank facilitation and contour integration: Different sites. Vision Research, 46 (21), 3699–3706.
Hübner R. (1997). The effect of spatial frequency on global precedence and hemispheric differences. Perception & Psychophysics, 59 (2), 187–201.
Johnston E. B., Cumming B. G., Landy M. S. (1994). Integration of stereopsis and motion shape cues. Vision Research, 34 (17), 2259–2275.
Kapadia M. K., Ito M., Gilbert C. D., Westheimer G. (1995). Improvement in visual sensitivity by changes in local context: Parallel studies in human observers and in v1 of alert monkeys. Neuron, 15 (4), 843–856.
Kida T., Tanaka E., Takeshima Y., Kakigi R. (2011). Neural representation of feature synergy. NeuroImage, 55 (2), 669–680.
Kimchi R. (1992). Primacy of wholistic processing and global/local paradigm: A critical review. Psychological Bulletin, 112 (1), 24–38.
Koene A. R., Zhaoping L. (2007). Feature-specific interactions in salience from combined feature contrasts: Evidence for a bottom-up salience map in v1. Journal of Vision, 7 (7): 6, 1–14, http://www.journalofvision.org/content/7/7/6, doi:10.1167/7.7.6. [PubMed] [Article]
Kourtzi Z., Huberle E. (2005). Spatiotemporal characteristics of form analysis in the human visual cortex revealed by rapid event-related fMRI adaptation. NeuroImage, 28 (2), 440–452.
Kourtzi Z., Kanwisher N. (2000). Cortical regions involved in perceiving object shape. Journal of Neuroscience, 20 (9), 3310–3318.
Kubovy M., Cohen D. J., Hollier J. (1999). Feature integration that routinely occurs without focal attention. Psychonomic Bulletin & Review, 6 (2), 183–203.
Ledgeway T., Hess R. F., Geisler W. S. (2005). Grouping local orientation and direction signals to extract spatial contours: Empirical tests of “association field” models of contour integration. Vision Research, 45 (19), 2511–2522.
Lee D. T., Schachter B. J. (1980). Two algorithms for constructing a Delaunay triangulation. International Journal of Parallel Programming, 9 (3), 219–242.
Legge G. E. (1978). Sustained and transient mechanisms in human vision: Temporal and spatial properties. Vision Research, 18 (1), 69–81.
Lerner Y., Hendler T., Ben-Bashat D., Harel M., Malach R. (2001). A hierarchical axis of object processing stages in the human visual cortex. Cerebral Cortex, 11 (4), 287–297.
Li C. Y., Li W. (1994). Extensive integration field beyond the classical receptive field of cat's striate cortical neurons-classification and tuning properties. Vision Research, 34 (18), 2337–2355.
Li W., Gilbert C. D. (2002). Global contour salience and local colinear interactions. Journal of Neurophysiology, 88 (5), 2846–2856.
Li Z. (1998). A neural model of contour integration in the primary visual cortex. Neural Computation, 10 (4), 903–940.
Li Z. (2002). A salience map in primary visual cortex. Trends in Cognitive Sciences, 6 (1), 9–16.
Loffler G. (2008). Perception of contours and shapes: Low and intermediate stage mechanisms. Vision Research, 48 (20), 2106–2127, doi:10.1016/j.visres.2008.03.006.
Loftus G. R., Harley E. M. (2004). How different spatial-frequency components contribute to visual information acquisition. Journal of Experimental Psychology: Human Perception and Performance, 30 (1), 104–118.
Machilsen B., Wagemans J. (2011). Integration of contour and surface information in shape detection. Vision Research, 51 (1), 179–186, doi:10.1016/j.visres.2010.11.005.
Mack M. L., Gauthier I., Sadr J., Palmeri T. J. (2008). Object detection and basic-level categorization: Sometimes you know it is there before you know what it is. Psychonomic Bulletin & Review, 15 (1), 28–35.
Malach R., Amir Y., Harel M., Grinvald A. (1993). Relationship between intrinsic connections and functional architecture revealed by optical imaging and in vivo targeted biocytin injections in primate striate cortex. Proceedings of the National Academy of Sciences, USA, 90 (22), 10469–10473.
Marquardt D. (1963). An algorithm for least-squares estimation of nonlinear parameters. SIAM Journal on Applied Mathematics, 11 (2), 431–441.
Maunsell J. H., Treue S. (2006). Feature-based attention in visual cortex. Trends in Neurosciences, 29 (6), 317–322, doi:10.1016/j.tins.2006.04.001.
Macmillan N. A., Creelman C. D. (2005). Detection theory: A user's guide. London: Erlbaum.
Meinhardt G., Persike M. (2003). Strength of feature contrast mediates interaction among feature domains. Spatial Vision, 16 (5), 459–478.
Meinhardt G., Persike M., Mesenholl B., Hagemann C. (2006). Cue combination in a combined feature contrast detection and figure identification task. Vision Research, 46 (23), 3977–3993.
Meinhardt G., Schmidt M., Persike M., Röers B. (2004). Feature synergy depends on feature contrast and objecthood. Vision Research, 44 (16), 1843–1850.
Navon D. (1977). Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9 (3), 353–383.
Nelson J. I., Frost B. J. (1985). Intracortical facilitation among co-oriented, co-axially aligned simple cells in cat striate cortex. Experimental Brain Research, 61 (1), 54–61.
Ng J., Bharath A. A., Zhaoping L. (2007). A survey of architecture and function of the primary visual cortex (v1). EURASIP Journal on Advances in Signal Processing, 097961, doi:10.1155/2007/97961.
Nothdurft H. C. (2000). Salience from feature contrast: Additivity across dimensions. Vision Research, 40 (10-12), 1183–1201.
Oliva A., Torralba A. (2006). Building the gist of a scene: The role of global image features in recognition. Progress in Brain Research, 155, 23–36, doi:10.1016/S0079-6123(06)55002-2.
Owsley C., Sekuler R., Siemsen D. (1983). Contrast sensitivity throughout adulthood. Vision Research, 23 (7), 689–699.
Peli E., Arend L., Labianca A. T. (1996). Contrast perception across changes in luminance and spatial frequency. Journal of the Optical Society of America A, Optics, Image Science, and Vision, 13 (10), 1953–1959.
Persike M., Meinhardt G. (2006). Synergy of features enables detection of texture defined figures. Spatial Vision, 19 (1), 77–102.
Persike M., Meinhardt G. (2008). Cue summation enables perceptual grouping. Journal of Experimental Psychology: Human Perception and Performance, 34 (1), 1–26.
Persike M., Olzak L. A., Meinhardt G. (2009). Contour integration across spatial frequency. Journal of Experimental Psychology: Human Perception and Performance, 35 (6), 1629–1648, doi:10.1037/a0016473.
Pettet M. W. (1999). Shape and contour detection. Vision Research, 39 (3), 551–557.
Rasche C., Koch C. (2002). Recognizing the gist of a visual scene: Possible perceptual and neural mechanisms. Neurocomputing, 44, 979–984.
Regan D., Beverley K. I. (1983). Spatial-frequency discrimination and detection: Comparison of postadaptation thresholds. Journal of the Optical Society of America, 73 (12), 1684–1690.
Robson J. G. (1966). Spatial and temporal contrast-sensitivity functions of visual system. Journal of the Optical Society of America, 56 (8), 1141–1142, doi:10.1364/Josa.56.001141.
Robson J. G., Graham N. (1981). Probability summation and regional variation in contrast sensitivity across the visual field. Vision Research, 21 (3), 409–418.
Rodrigues J., du Buf J. M. (2006). Multi-scale keypoints in V1 and beyond: Object segregation, scale selection, salience maps and face detection. Biosystems, 86 (1–3), 75–90.
Rovamo J., Virsu V., Nasanen R. (1978). Cortical magnification factor predicts the photopic contrast sensitivity of peripheral vision. Nature, 271 (5640), 54–56.
Saarela T. P., Landy M. S. (2012). Combination of texture and color cues in visual segmentation. Vision Research, 58, 59–67, doi:10.1016/j.visres.2012.01.019.
Sagi D. (1988). The combination of spatial frequency and orientation is effortlessly perceived. Perception & Psychophysics, 43 (6), 601–603.
Sagi D., Julesz B. (1984). Detection versus discrimination of visual orientation. Perception, 13 (5), 619–628.
Schmidt K. E., Goebel R., Lowel S., Singer W. (1997). The perceptual grouping criterion of colinearity is reflected by anisotropies of connections in the primary visual cortex. European Journal of Neuroscience, 9 (5), 1083–1089.
Schyns P. G., Oliva A. (1994). From blobs to boundary edges: Evidence for time and spatial scale dependent scene recognition. Psychological Science, 5 (4), 195–200.
Sergent J., Hellige J. B. (1986). Role of input factors in visual-field asymmetries. Brain and Cognition, 5 (2), 174–199, doi:10.1016/0278-2626(86)90054-0.
Shpaner M., Molholm S., Forde E., Foxe J. J. (2013). Disambiguating the roles of area V1 and the lateral occipital complex (LOC) in contour integration. NeuroImage, 69, 146–156, doi:10.1016/j.neuroimage.2012.11.023.
Shulman G. L., Wilson J. (1987). Spatial frequency and selective attention to local and global information. Perception, 16 (1), 89–101.
Stettler D. D., Das A., Bennett J., Gilbert C. D. (2002). Lateral connectivity and contextual interactions in macaque primary visual cortex. Neuron, 36 (4), 739–750.
Straube S., Fahle M. (2010). The electrophysiological correlate of salience: Evidence from a figure-detection task. Brain Research, 1307, 89–102, doi:10.1016/j.brainres.2009.10.043.
Straube S., Fahle M. (2011). Visual detection and identification are not the same: Evidence from psychophysics and fMRI. Brain and Cognition, 75 (1), 29–38, doi:10.1016/j.bandc.2010.10.004.
Sugase Y., Yamane S., Ueno S., Kawano K. (1999). Global and fine information coded by single neurons in the temporal visual cortex. Nature, 400 (6747), 869–873.
Tamura H., Tanaka K. (2001). Visual response properties of cells in the ventral and dorsal parts of the macaque inferotemporal cortex. Cerebral Cortex, 11 (5), 384–399.
Tanner W. P. (1956). Theory of recognition. Journal of the Acoustical Society of America, 28, 882–888.
To M. P., Baddeley R. J., Troscianko T., Tolhurst D. J. (2011). A general rule for sensory cue summation: Evidence from photographic, musical, phonetic and cross-modal stimuli. Proceedings of the Royal Society B: Biological Sciences, 278 (1710), 1365–1372, doi:10.1098/rspb.2010.1888.
Tolhurst D. J. (1975). Reaction-times in detection of gratings by human observers: A probabilistic mechanism. Vision Research, 15 (10), 1143–1149, doi:10.1016/0042-6989(75)90013-9.
Tversky T., Geisler W. S., Perry J. S. (2004). Contour grouping: Closure effects are explained by good continuation and proximity. Vision Research, 44 (24), 2769–2777.
Vassilev A., Mitov D. (1976). Perception time and spatial frequency. Vision Research, 16 (1), 89–92, doi:10.1016/0042-6989(76)90081-X.
Wilson H. R., Gelb D. J. (1984). Modified line-element theory for spatial-frequency and width discrimination. Journal of the Optical Society of America A, 1 (1), 124–131.
Wilson H. R., McFarlane D. K., Phillips G. C. (1983). Spatial frequency tuning of orientation selective units estimated by oblique masking. Vision Research, 23 (9), 873–882.
Zhaoping L., Zhe L. (2012). Properties of v1 neurons tuned to conjunctions of visual features: application of the v1 salience hypothesis to visual search behavior. PLoS One, 7 (6), e36223, doi:10.1371/journal.pone.0036223.
Figure 1
 
Construction of stimulus displays. Depicted are the initial hexagonal grid with 225 element positions with a rectangle indicating the possible area of contour placement (1), the addition of a circular or S-shaped contour (2), the spatial diffusion of background element positions (3), and the placement of stimulus patches onto the element positions (4).
Figure 2
 
Stimuli used in the experiments. Depicted are examples for two of the four visibility levels used in the experiments. Only the central stimulus region comprising the contour is shown here. Single cue target contours were defined by orientation alignment (ϕ) or spatial frequency feature contrast in either upward (f↑) or downward (f↓) direction relative to the background. Double cue targets were generated by combining the orientation alignment cue with each of the feature contrast cues. Spatial frequency contrast was established by shifting the spatial frequency of (A) contour elements (Experiment 1) or (B) background elements (Experiment 2) in upward or downward direction.
Figure 3
 
Summary of main effects in Experiment 1. The figure depicts mean d′ and proportion correct for feature contrast detection (black circles) and shape discrimination (gray squares) for contours defined by orientation (ϕ), upward or downward spatial frequency contrast (f↑ and f↓), and the double cues (ϕ + f↑ and ϕ + f↓). Data are shown for the four visibility levels. Error bars denote 95% confidence limits of the mean, based on the standard error of measurement of each cell. The summation gain, as expressed by q-values (6), is calculated only for the detection task. Note that single cue performance (ϕ, f↑, and f↓) is computed from data obtained in the main experiment. It may thus exhibit slight deviations from the visibility levels estimated during calibration (see Methods).
Figure 4
 
Summary of main effects in Experiment 2. Conventions as in Figure 3.
Figure 5
 
Sensitivity difference Δd′ according to (7). The gray area denotes the prediction derived from the integration of independent cues, d′⊥.
Figure 6
 
Spatial frequency contrasts from both experiments.
Table 1
 
Sensitivity advantage of double-cue targets compared to the base sensitivity level. The table shows the base sensitivity level, the mean sensitivity for double-cue targets, the mean sensitivity difference, and the ratio of double-cue to single-cue performance. Data are shown for both tasks and figure types at the four feature contrast levels.
Table 2
 
Sensitivity advantage of double-cue targets compared to the base sensitivity level. Conventions are as in Table 1.