Article  |   June 2012
Orientation coherence sensitivity
Jesse S. Husk, Pi-Chun Huang, Robert F. Hess
Journal of Vision June 2012, Vol. 12, 18. doi: https://doi.org/10.1167/12.6.18
Citation: Jesse S. Husk, Pi-Chun Huang, Robert F. Hess; Orientation coherence sensitivity. Journal of Vision 2012;12(6):18. https://doi.org/10.1167/12.6.18.
Abstract
We developed a global orientation coherence task for the assessment of global form processing along similar lines to the global motion coherence task. The task involved judgments of global orientation for an array of limited duration 1-D Gabors, some of which were signal (signal orientation) and some of which were noise (random orientation). We address two issues. First: Do motion and form global processing have similar dependencies? And second: Can global sensitivity be explained solely in terms of integrative function? While most dependencies (e.g., contrast, spatial scale, and field size) are similar for form and motion processing, there is a greater dependence on eccentricity for form processing. Sensitivity for global tasks involves more than just integration by filters broadly tuned for orientation. Results are best modeled by filters with narrowband orientation tuning that effectively segregate as well as integrate global information.

Introduction
Our visual abilities involve not only the detection of localized low contrast static or moving stimuli, but also the integration of form and motion information that extends across space. This latter ability is often referred to as global processing and has been studied more extensively in terms of motion processing (Hess & Aaen-Stockdale, 2008; Morgan & Ward, 1980; Newsome & Pare, 1988; Williams & Brannan, 1994; Williams & Sekuler, 1984). Global motion stimuli consist of an array of moving elements where some are moving in a coherent direction (termed signal) and the rest are moving in a random direction (termed noise). The ability of the visual system to detect the motion direction of the signal elements in the presence of noise is thought to reflect the integrative and segregative properties of neurons in the dorsal pathway and, in particular, area MT/MST (Newsome & Pare, 1988). Our ability to detect global motion is limited by two stages of processing (Morrone, Burr, & Vaina, 1995): the initial stage of local motion transduction, thought to be in striate cortex, and a later stage of global integration of these local motions, thought to be in extrastriate cortex. When contrast effects (thought to be at stage 1) are factored out, there is little or no eccentricity or velocity dependence, suggesting that the efficiency of the stage 2 integration does not change with velocity or eccentricity (Hess & Aaen-Stockdale, 2008). 
Another extrastriate region is thought to be involved in a complementary analysis of spatial form (Ungerleider & Mishkin, 1982; Braddick et al., 2000); however, much less is known about the global properties of form analysis in a way that would allow comparison between the basic principles underlying processing in these different extrastriate regions. Some previous approaches have used spatially broadband stimuli such as dots (Wilson, Wilkinson, & Asaad, 1997) and lines (Braddick, O'Brien, Wattam-Bell, Atkinson, & Turner, 2000; Jones, Anderson, & Murphy, 2003), and it is not clear to what extent the results obtained are influenced by local as opposed to global processing.
Certainly, the use of dots in Glass patterns has provided evidence for a global processing stage that some (Wilson et al., 1997) have argued preferentially feeds into global detectors tuned to rotational shapes (also see Dakin & Bex, 2002). The standard model of global form processing involves two stages: a first stage of orientation filtering of dot pairs and a second stage of integration of locally derived orientations (Dakin & Bex, 2002; Wilson & Wilkinson, 1996; Wilson et al., 1997). Performance depends on both stages, often leaving unclear the relative contribution of each stage of processing. The best attempt at separating out the relative local versus global contributions involved the use of spatially band-pass, though isotropic, elements (Dakin & Bex, 2001b). These authors argue that the global stage is spatially low-pass and sensitive to stimulus “visibility.” However, even this approach leaves open the extent to which sensitivity could have been influenced by local orientation filtering because, even in the global manipulations, the elements were isotropic.
In another approach, orientationally modulated 1-D noise stimuli have been used to provide evidence of a global orientation processing stage (Jones et al., 2003). Using a task that required simple orientation discrimination, Jones et al. showed that global performance improved with stimulus area. The improvement was well described by a simple computational model of global integration of local orientation. One potential problem with this approach is that, because the local orientation bandwidth covaries with coherence, it is not clear to what extent first-stage orientation processing contributed to the task.
Ideally, one would use equally visible, spatial frequency and orientationally narrowband stimuli in which the local information (orientation and spatial frequency) is made explicit (i.e., matched to the properties of cellular receptive fields in V1). Stage 1 operations would then be less likely to provide the major limit to sensitivity, allowing limitations imposed by the second stage of processing (i.e., beyond V1) to predominate. Only a few studies have used such an approach and, in most cases, the task involved spatial shape rather than global orientation. Field, Hayes, and Hess (1993) used such a stimulus (i.e., Gabor micropatterns) where the local orientation was made explicit and the task involved the detection of contour segments. Achtman, Hess, and Wang (2003) also used a comparable stimulus where integration of the local orientation represented radial form. Dakin and Watt (1997), and Dakin (2001), using local elements with well-defined orientations, showed that the visual system could derive the global mean and variance, and that there was no areal limit to these computations.
Both global tasks were form-based, and the results depended to a large extent on the type of form to be detected, though probably not at all on the local filtering stage owing to the unambiguous nature of the elements. To make a comparison between motion and form function at a stage before shape or structure-from-motion, we used a more elementary task that involved the detection of global orientation rather than the spatial form derived from local orientations across space. This is more appropriate for comparison with a dorsal coherence task involving motion direction. We address two key questions. First: Do motion and form global processing have similar dependencies? And second: Can coherence detection be thought of as a purely integrative process? To answer the first question, we investigated a number of key properties of global form for which we have comparable global motion data: spatial properties (spatial frequency and orientation) (Dakin, Mareschal, & Bex, 2005; Hess & Aaen-Stockdale, 2008), contrast (Hess & Aaen-Stockdale, 2008; Simmers, Ledgeway, Hess, & McGraw, 2003), eccentricity (Hess & Aaen-Stockdale, 2008; Hess & Zaharia, 2010), and areal summation (Dakin, 2001; Downing & Movshon, 1989; Watamaniuk & Sekuler, 1992; Williams & Sekuler, 1984). To answer the second question, we modified our task so that there is a clear prediction if performance is solely determined by integrative mechanisms.
We presented 10 × 10 element arrays of 1-D Gabors (i.e., a 1-D sinusoid windowed by a 2-D Gaussian envelope) at one of two different orientations (separated by ±90° or ±10°), and the degree of coherence was changed by randomizing the orientation of a subset of the elements. Coherence thresholds were found to be insensitive to spatial, contrast, and areal properties but to exhibit an eccentricity dependence. Further, global orientation tasks were found to be well-characterized by a simple segregating filter-based model. 
Methods
Observers
One author (JH) participated in all experimental conditions. Authors RH and PCH participated in a subset of experimental conditions. Most experimental conditions also included at least two additional naive observers, with partial overlap of observers across experimental conditions; details follow. All observers had normal or corrected-to-normal acuity. All observers participated with informed consent in adherence to the Declaration of Helsinki.
Stimuli
The stimulus consisted of an array of 100 elements arranged in a 10 × 10 Cartesian square grid (grid diameter = 10.00° visual angle). Pilot data indicated that the spatial arrangement of elements had no impact on task performance: coherence thresholds were not significantly different when elements were arranged in a regular grid or when elements were spatially jittered with a lifetime of 111 ms (two subjects; JH, t = −1.14, p = 0.29; RFH, t = 0.032, p = 0.98). The lifetime parameter was not crucial to the outcome of these experiments and was included to make the task more equivalent to its motion counterpart.
The individual elements were Gabor patches defined by the equation:

G(x, y) = c · exp[−(x² + y²)/(2σ²)] · cos[2πf(x cos θ + y sin θ)]

where c is the contrast, f is the spatial frequency of the sinusoidal carrier, θ is the orientation of the carrier, and σ is the space constant of the Gaussian envelope.
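As a concrete illustration, an element of this form can be generated in a few lines of NumPy. This is a sketch, not the authors' code; the patch size and pixels-per-degree values are illustrative assumptions:

```python
import numpy as np

def gabor_patch(size_px, c, f, theta, sigma, px_per_deg):
    """Sketch of a 1-D Gabor element: sinusoidal carrier of contrast c,
    spatial frequency f (cycles/deg), and orientation theta (radians),
    windowed by an isotropic 2-D Gaussian with space constant sigma (deg).
    size_px and px_per_deg are illustrative display assumptions."""
    coords = (np.arange(size_px) - size_px // 2) / px_per_deg  # in degrees
    x, y = np.meshgrid(coords, coords)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * f * (x * np.cos(theta) + y * np.sin(theta)))
    return c * envelope * carrier
```

The returned array is a contrast-modulation profile centered on zero, ready to be scaled about the display's mean luminance.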
The presentation of individual elements was flickered asynchronously across the array, so that on any single frame only a randomly distributed proportion (approximately 50%) of the 100 elements was displayed. Once an element reached the end of its lifetime, it was removed for the same duration (10 frames) before reappearing at the same location. As a result, each individual element retained the same position and orientation throughout the entire 1-second stimulus duration but was pulsed on and off for 111 ms at a time. The temporal phase of this flickering was randomized across elements such that, on average, approximately half the elements were visible on any given frame. However, because the element duration was short, the effective appearance was that of a full complement of 100 elements, even though that was never true on any single frame.
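One way to realize this asynchronous pulsing, a sketch under the stated parameters (10-frame lifetime at 90 Hz, random temporal phase per element), is to precompute a per-frame visibility table:

```python
import numpy as np

def visibility_schedule(n_elements, n_frames, lifetime=10, rng=None):
    """Each element cycles `lifetime` frames on, `lifetime` frames off
    (10 frames = 111 ms at 90 Hz), with a random temporal phase per
    element, so roughly half the elements are visible on any one frame."""
    rng = rng if rng is not None else np.random.default_rng()
    phase = rng.integers(0, 2 * lifetime, size=n_elements)[:, None]
    t = np.arange(n_frames)[None, :]
    return ((t + phase) % (2 * lifetime)) < lifetime  # True = element drawn

# 100 elements over a 1-second (90-frame) presentation
visible = visibility_schedule(100, 90, rng=np.random.default_rng(1))
```

Because the phases are uniform over the 20-frame cycle, the expected proportion of visible elements on any frame is one half.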
Within the array, the elements were drawn from one of two distributions: a signal distribution or a noise distribution, randomly intermixed across the array. The signal elements were always either all horizontal or all vertical depending on the trial. The noise elements were randomly oriented between 0° and 360°. All elements were presented with the same fixed contrast within any given trial. Stimuli were generated online for each trial. 
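Putting the signal/noise assignment together, a trial's element orientations might be drawn as follows (a sketch; the function and variable names are ours, not the authors'):

```python
import numpy as np

def trial_orientations(n_elements, coherence, signal_ori, rng):
    """Assign an orientation (deg) to every element: a `coherence`
    proportion carry the signal orientation, the remainder are noise
    drawn uniformly from [0, 360), intermixed at random positions."""
    n_signal = int(round(coherence * n_elements))
    oris = rng.uniform(0.0, 360.0, size=n_elements)  # start all as noise
    oris[:n_signal] = signal_ori                     # overwrite with signal
    rng.shuffle(oris)                                # intermix across array
    return oris

# 60% coherence, vertical (90 deg) signal, 100-element array
oris = trial_orientations(100, 0.6, 90.0, np.random.default_rng(0))
```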
Display
Subjects were seated in a dimly lit room and viewed the stimuli on a Compaq monitor (effective dimensions: 38.7 × 29 cm; resolution: 1024 × 768 pixels; frame rate: 90 Hz) driven by an NVIDIA GeForce 8600M GT OpenGL Engine with 256 MB of video RAM housed in an Apple MacBook Pro 2.4 GHz Intel Core 2 Duo computer. The display was gamma corrected with a mean luminance of 26.9 cd/m².
Procedure
Coherence threshold estimation
Observers were presented with a two-alternative forced-choice task. On each trial, a fixation mark was presented for 0.5 seconds, after which the stimulus array was displayed for 1 second. The observer's task was to discriminate the signal element orientation (horizontal or vertical) and to indicate their selection by keyboard press. Auditory feedback was provided after the subject's response: a high-pitched tone for a correct response and a low-pitched tone for an incorrect response.
A staircase method was used to estimate a coherence threshold. Percent coherence was defined as the proportion of array elements drawn from the signal distribution (having the target orientation) relative to the total number of elements (signal plus noise elements). The percent coherence level was varied across trials based on a 2-down-1-up staircase with variable step-sizes. (Stimulus decreases were initially controlled by a step-size of 50% that was reduced to 12.5% after the first reversal. The stimulus increase step-size was always 25%.) The staircase converges, in this case, at a criterion level of 81.6% correct (Garcia-Perez, 1998; Sankeralli & Mullen, 1996). The initial coherence value was randomly selected in the range 60 ± 10% with a cap at 100% coherence. Sessions ended when the staircase reached 12 reversals, and the threshold was computed from the mean of the last six reversals. 
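The staircase rule can be sketched as follows. Since the paper states step sizes as percentages, we assume steps proportional to the current coherence level; the deterministic simulated observer below is purely illustrative:

```python
import numpy as np

def staircase_2down1up(respond, start, n_reversals=12):
    """2-down-1-up staircase: two consecutive correct responses lower the
    level (by 50% before the first reversal, 12.5% after); one error
    raises it by 25%, capped at 100% coherence.  The threshold estimate
    is the mean of the last six reversal levels."""
    level, run, reversals, last_dir = start, 0, [], None
    while len(reversals) < n_reversals:
        if respond(level):
            run += 1
            if run < 2:
                continue
            run = 0
            step = 0.5 if not reversals else 0.125
            new_level, direction = level * (1.0 - step), "down"
        else:
            run = 0
            new_level, direction = min(level * 1.25, 100.0), "up"
        if last_dir is not None and direction != last_dir:
            reversals.append(level)  # direction change marks a reversal
        last_dir, level = direction, new_level
    return float(np.mean(reversals[-6:]))

# Illustrative deterministic observer: correct whenever coherence >= 10%
threshold = staircase_2down1up(lambda lv: lv >= 10.0, start=60.0)
```

With this toy observer the staircase oscillates about the 10% boundary, so the reversal mean lands near it.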
Contrast threshold estimation
All coherence threshold estimations were made at a fixed stimulus contrast. The contrast used was a multiple of the observer's contrast threshold. To estimate that threshold, observers performed the same two-alternative experimental task but with the signal-to-noise ratio fixed at 60% coherence. A 2-down-1-up staircase varied Michelson contrast across trials with an initial value of 20% ± 10% and capped at 100% contrast. The step-size and staircase completion parameters were the same as used for estimating coherence thresholds. 
Experimental manipulations
Contrast
To determine whether performance scales with contrast, five observers (two authors and three naive observers) were tested at five different contrasts. Contrasts were set to 2.5, 4, 6, 8, and 12 times individual contrast thresholds. Observers completed three contrast threshold runs, and the thresholds were averaged to determine the final contrast threshold. Observers then completed six coherence threshold runs (at each contrast). Coherence thresholds were averaged across the six runs to produce an estimate at each contrast level. 
Eccentricity
To determine whether task performance decreased with the eccentric location of the stimulus, a subset of the subjects from the contrast manipulation (one author and three naive observers) were also tested at three additional eccentricities (10°, 20°, and 30°). Of interest was whether performance would be degraded in the periphery beyond what could be explained by changes in contrast thresholds with eccentricity. Thus, observers were tested at the same five contrast threshold multiples at each eccentricity (with the exception of NW, for whom the maximum contrast tested at 30° was 10× rather than 12× threshold, because 12× threshold would have exceeded the maximum displayable contrast). Contrast thresholds were based on the average of four runs (with the stimulus array presented twice to the left and twice to the right of fixation). Coherence thresholds were based on the average of six runs (again, half to the right and half to the left of fixation). For the eccentricity manipulation alone, the fixation point remained on the screen for the duration of the trial. For the 10° manipulation, the fixation point was centrally presented with the array laterally shifted on the screen. Due to screen size limits, for the 20° and 30° manipulations, the stimulus was centrally presented on the screen with an eccentric fixation point presented at the edge of the monitor.
Spatial frequency
Three observers (one author and two naive observers) completed the spatial frequency manipulation. Of interest was whether task performance would scale with spatial frequency beyond limitations in contrast thresholds. Performance across spatial frequencies was tested in two ways: first, by varying the spatial frequency content of the elements, and second, by varying viewing distance for fixed element parameters to scale the retinal spatial frequency while holding the number of visible cycles constant. 
Orientation bandwidth
To manipulate the orientation bandwidth, the Gabor elements were replaced by filtered noise elements. These elements were composed of white noise (with a randomized noise seed across trials), filtered to constrain both the spatial frequency and orientation content of the stimulus, then enveloped by a Gaussian window (stimulus equation detailed in Beaudot & Mullen, 2006). When filtered narrowly along both spatial frequency and orientation dimensions, these noise patches were effectively Gabor patches. 
Orientation bandwidth was manipulated in one of two ways: locally within each element or globally across elements. In the local case, the individual element bandwidths were widened. In the global case, the local orientation bandwidth was narrow and fixed while the peak orientation was jittered across elements to increase the orientation variance across the array. 
These manipulations were performed in the context of a coarse orientation judgment (horizontal vs. vertical) and in the context of a fine judgment (±10° off vertical). In all cases, element orientations were drawn from a normal distribution centered on the target orientation. For coarse judgments, bandwidth was varied by using an orientation standard deviation of 0°, 20°, 40°, and 60°, and for fine judgments, bandwidth was varied using an orientation standard deviation of 0°, 5°, 10°, and 20°. When manipulating orientation globally across the array, the individual element bandwidths were fixed at an orientation standard deviation of 1° (corresponding to a half-bandwidth at half-height of 1.18°).
Aperture size
Three observers (one author and two naive observers) were tested at a range of different aperture sizes. Of interest was whether the aperture size was important for task performance (Jones et al., 2003). To keep density constant while varying the number of elements, both the grid diameter and the number of grid elements were varied in proportion to the grid area. The resulting apertures were 1.5°, 2°, 2.5°, 5°, 7.5°, and 10° in diameter and contained (correspondingly) 36, 64, 100, 400, 900, and 1,600 elements. This maintained a constant density of 16 elements/deg². To accommodate this density, we reduced the sizes of the individual elements (across all aperture sizes) by lowering the standard deviation of the Gaussian window to 0.1° and raising the spatial frequency to 6 cpd. By changing these two parameters together, we maintained the same number of visible cycles relative to other experimental manipulations.
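The element counts follow directly from the fixed density; a quick arithmetic check (treating the quoted diameter as the side of the square grid):

```python
# Fixed density of 16 elements/deg^2: count = density * diameter^2
density = 16  # elements per square degree
diameters = [1.5, 2.0, 2.5, 5.0, 7.5, 10.0]  # grid diameters in degrees
counts = [round(density * d ** 2) for d in diameters]
# counts -> [36, 64, 100, 400, 900, 1600], matching the text
```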
Underlying computations
Two authors (JH and PCH) participated in this experiment. To test the extent to which orientation coherence thresholds involve a purely integrative process as opposed to a combination of signal integration and segregation, the orientation bandwidth task was modified slightly: a biased orientation was introduced into the presented noise elements by constraining the variance of the orientation noise. The noise elements, rather than varying across the full range of 360°, varied with a standard deviation of 20°, 40°, or 60°. These ranges were centered on a mean orientation opposite to that of the signal (−10° when the signal was +10°, and vice versa). Signal elements were presented with an orientation bandwidth standard deviation of 0°, 10°, or 20°. This task makes the following predictions: if performance is based on a purely integrative process, coherence thresholds should be invariant with the standard deviation of the noise because the distance between the noise and signal means is invariant. On the other hand, a segregative process would predict a performance improvement as the standard deviation of the noise increases, due to the increased dispersion of the competing noise signal (see Appendix A for a more detailed description of these models).
Results
Key properties
Contrast
We measured coherence thresholds at a range of contrasts. Separately for each participant, we first determined the contrast threshold required to perform the task when coherence was fixed at 60% signal. Participants required a mean of 1.8% contrast (SE = 0.21) to distinguish horizontal from vertical gratings at 60% coherence. We then tested coherence thresholds at various contrast multiples above this contrast threshold (2.5, 4, 6, 8, and 12 times individual contrast thresholds). 
On average, across all contrast levels tested, participants required 12.1% orientation coherence to distinguish horizontal from vertical orientations. This coherence threshold varied only minimally with contrast, ranging from 10.9% to 14.3% from high to low contrast. This is depicted in Figure 3A, where coherence thresholds are plotted against the contrast of the stimulus in multiples above the stimulus' contrast threshold. Individual results are shown as grey lines and symbols, and the averaged result as a black line and symbols. There is an indication that performance is marginally reduced only when stimuli are close to their absolute contrast threshold.
Figure 1
 
Stimulus examples. (A) Default global orientation task. This example depicts the situation where the signal is made up of vertically oriented elements while the noise elements are randomly oriented. (B) Row 1: Spatial frequency increasing from left to right; Row 2: Global orientation bandwidth increasing from left to right; Row 3: Local orientation bandwidth increasing from left to right. All examples depict vertical signal trials. Note: For figure clarity, these stimulus examples depict 5 × 5 stationary grids. The actual stimuli were 10 × 10 grids with limited lifetime elements.
Figure 2
 
Aperture size manipulation. Depicted from left to right are examples of decreasing aperture sizes. The total number of elements was varied to hold density constant.
Figure 3
 
Coherence thresholds for individual subjects (grey lines), and across-subject means (thick black lines). Error bars represent ±1 standard error of the mean. (A) Coherence threshold as a function of contrast. Contrast is presented in terms of fixed multiples above each subject's contrast threshold. (B) Coherence threshold as a function of aperture size. Element size and density were held constant while varying the number of presented elements to vary the overall stimulus aperture (grid diameter). (C) Coherence threshold as a function of spatial frequency. The spatial frequency of the carrier was varied within a fixed envelope, such that cycles per object varied with spatial frequency. (D) Coherence threshold as a function of spatial frequency with a fixed number of cycles per object. This was achieved by varying viewing distance to control the retinal spatial frequency of the elements.
Aperture size
Figure 3B shows the effect of the aperture size manipulation (Figure 2). Observers required an average of 11.4% coherence to distinguish horizontal from vertical orientations. A repeated-measures ANOVA comparing the six aperture sizes revealed that coherence thresholds were not significantly affected by the number of presented elements (F[5, 10] = 0.344, p = 0.88). With the largest viewing aperture (10°, corresponding to 1,600 elements), participants required 11.4% coherence. At the smallest aperture (1.5°, corresponding to 36 elements), participants required just as little coherence (11.6%).
Spatial frequency
Figure 3C and 3D show the effect of stimulus spatial frequency. A repeated-measures ANOVA comparing the four spatial frequency levels indicated a (nonsignificant) trend for subjects to require more coherence when performing the task with low (1 cpd) and high (10 cpd) spatial frequency Gabors, relative to midrange (3–6 cpd) spatial frequency Gabors (F[3, 6] = 4.48, p = 0.056). 
To consider whether the trend toward a spatial frequency dependence for coherence thresholds was due to changes in the number of visible cycles, we varied spatial frequency using Gabors with a fixed number of cycles (Figure 3D). The Gabor was fixed such that, at a viewing distance of 60 cm, it had a spatial frequency of 3 cpd. Then viewing distance was varied to alter the retinal spatial frequency while not varying the displayed stimulus, thereby holding the number of cycles constant. Under these conditions, spatial frequency had no effect on task performance (F[3, 6] = 0.17, p = 0.91). 
Eccentricity
In considering the effects of eccentricity on coherence thresholds, we also measured the contrast threshold for the stimulus at 60% coherence at every eccentric location. Coherence thresholds were lowest at the fovea, with subjects requiring 12.4% orientation coherence to distinguish horizontal from vertical orientations. Coherence thresholds rose progressively as eccentricity increased, with subjects requiring 23.1% coherence for displays centered 30° from the fovea. Because threshold variability also increased with eccentricity, statistical analyses were performed on log-transformed data. A 4 (eccentricity) × 5 (contrast) repeated-measures ANOVA revealed no significant effect of contrast (F[4, 8] = 1.85, p = 0.21), nor a significant interaction between contrast and eccentricity (F[12, 24] = 1.21, p = 0.33), indicating that coherence thresholds were not affected by contrast (Figure 4A) at any eccentricity. There was, however, a main effect of eccentricity (F[2, 6] = 6.50, p = 0.03). A second one-way repeated-measures ANOVA, collapsed across contrast, indicated that this effect of eccentricity was well described by a linear trend (F[1, 2] = 13.509, p = 0.067), with mean thresholds increasing systematically across eccentricities: 12.4%, 15.2%, 17.1%, and 23.0% for 0°, 10°, 20°, and 30°, respectively (Figure 4B).
Figure 4
 
Mean coherence thresholds as a function of (A) contrast at multiple eccentricities and (B) of eccentricity at multiple contrasts. Data represent subject means, with error bars depicting ±1 standard error of the mean. The heavy solid line in (B) represents the mean, collapsed across contrasts.
Orientation bandwidth
We used two orientation bandwidth manipulations; one involved changing the bandwidth of individual elements (a local manipulation) whereas the other involved jittering the peak orientation of individual narrowband elements (a global manipulation). Each was defined by a Gaussian function so that their effects could be compared for the coarse (horizontal/vertical) as well as the fine (±10°) coherence tasks. The results are shown in Figure 5 where threshold performance in % coherence is plotted against the standard deviation of the Gaussian defining either the local or global orientation manipulation. 
Figure 5
 
Mean coherence thresholds for a coarse (H/V) (A) versus fine (±10°) (B) global orientation task as a function of either broadening the bandwidth of individual array elements (local orientation bandwidth manipulation: filled squares) or jittering the peak orientation of narrowband array elements (the global orientation bandwidth manipulation: grey circles). The global orientation manipulation is more effective for disrupting performance especially for the fine orientation task (±10°). The dashed lines are segregation model predictions (see Appendix A and Figure A1). Error bars represent ±1 standard error of the mean.
Because the coarse (horizontal/vertical) and the fine (±10°) tasks were measured on different scales (i.e., different x-axis ranges), we performed two separate repeated-measures ANOVAs to analyze these data sets. In each case, the analysis was a 2 (local vs. global) × 4 (orientation bandwidth) repeated-measures ANOVA.
Coherence thresholds rose systematically with increased orientation bandwidth (Figure 5) for both the coarse task (Main Effect of Orient BW: F[3, 6] = 36.73, p < 0.001) and the fine task (Main Effect of Orient BW: F[3, 6] = 14.75, p = 0.004). 
When orientation judgments were coarse (horizontal/vertical), subjects were overall significantly more impaired at orientation judgments when orientation bandwidth was manipulated globally than locally (Main Effect of Global vs. Local: F[1, 2] = 89.85, p = 0.011), and there was no significant interaction between the type of manipulation (local vs. global) and the orientation bandwidth (Interaction: F[3,6] = 2.643, p = 0.144). 
For finer orientation judgments (±10°), orientation judgments were marginally (but nonsignificantly) more impaired when orientation bandwidth was manipulated globally across elements than when bandwidth was manipulated locally within elements (Main Effect of Global vs. Local: F[1, 2] = 12.76, p = 0.07). Moreover, when bandwidth was manipulated globally, the rate at which performance was impaired was significantly greater than when bandwidth was manipulated locally (Interaction: F[3, 6] = 12.488, p = 0.005). In other words, when the orientation bandwidth is narrow, performance is similarly affected by local and global manipulations of bandwidth. However, as orientation bandwidth is widened, the global manipulation has a much more pronounced effect on performance than the local manipulation. This is in stark contrast to the coarse task, where the local and global manipulations had very similar effects across the entire orientation bandwidth range. Possible reasons for why the fine task depends more on global than local bandwidth are discussed later. 
Underlying computations
The second question relates to the extent to which orientation coherence thresholds involve a purely integrative process as opposed to a combination of signal integration and signal/noise segregation. Because the noise distribution has no net orientation bias, one plausible way that subjects could judge the orientation of the stimulus would be to simply average the orientations of all elements together (signal plus noise). This is what we mean by a purely integrative model.
We created two variations on this integrative model. The first of these integrative models, like that envisaged for the equivalent noise global orientation task (Dakin, 2001), uses all the information available in the noise + signal presentation of a global coherence task to compute a vector average of the element orientations. The second of these models calculates the maximum-likelihood estimate of the mean of a random subsample of the noise and signal elements (see Appendix A for more details about both integration models). 
On the other hand, subjects might perform the task by first segregating the signal from the noise then determining the average orientation of the signal. This type of segregation model would restrict such a computation to only noise + signal orientations that were relevant to the two possible signal orientations. In other words, in the case of the horizontal/vertical task, this model would restrict the computation to those signal and noise elements whose orientations were close to either 0° or 90°. We implemented this through filter-based segregation (i.e., filtering at the two possible signal orientations) followed by a comparison stage along the lines suggested by Jones et al. (2003). The details of this filtering model are presented in Appendix A
To distinguish between these two possibilities (pure integration vs. segregation with integration), we created a variation on our ±10° coherence discrimination task where, instead of using fully random noise (i.e., noise drawn from a distribution with a full range of 360° orientation), we measured how performance changes when the bandwidth of the noise is restricted to a much smaller range of orientations (bw = 0°, 10°, 20°, and 30°). Performance as a function of noise bandwidth (i.e., noise orientation jitter) was examined separately for each of three signal orientation bandwidths (bw = 0°, 10°, and 20°). Figure 6 plots coherence thresholds against the global orientation bandwidth of the noise for three signal orientation bandwidths (data shown as different symbols). Critically, by severely restricting the range of noise orientations, the noise itself becomes oriented and therefore, effectively, competes with the signal. Under these conditions, a purely integrative model makes very different predictions than a model involving a segregation stage. 
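As a concrete illustration of this stimulus logic, the sketch below draws element orientations for one trial. The function name and parameters are hypothetical: fully random noise is approximated as uniform over 0°–180°, and the restricted-noise conditions are obtained by passing a `noise_bw` value.

```python
import random

def make_element_orientations(n=100, coherence=0.4, signal_mean=10.0,
                              signal_bw=0.0, noise_bw=None):
    """Generate element orientations (degrees) for one stimulus.

    A proportion `coherence` of the n elements are signal, drawn around
    signal_mean with sd signal_bw; the rest are noise, drawn either fully
    at random (noise_bw=None) or around 0 deg with sd noise_bw (the
    restricted-noise conditions).
    """
    n_signal = round(coherence * n)
    signal = [random.gauss(signal_mean, signal_bw) for _ in range(n_signal)]
    if noise_bw is None:
        # Fully random noise: any orientation is equally likely.
        noise = [random.uniform(0.0, 180.0) for _ in range(n - n_signal)]
    else:
        # Restricted noise: the noise itself becomes oriented around 0 deg.
        noise = [random.gauss(0.0, noise_bw) for _ in range(n - n_signal)]
    return signal + noise
```

With `noise_bw` small, the noise elements cluster around 0° and can act as a competing signal, which is the manipulation exploited in this experiment.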
Figure 6
 
The effect of varying the noise global orientation bandwidth for a ±10° coherence detection task for two subjects. Results (symbols) are obtained at three signal orientation bandwidths (sd of 0°, 10°, and 20°). The dashed lines (different shades representing different signal orientation bandwidths) represent (A) the predictions of a purely integrative model that calculates the orientation vector average of all elements, (B) a purely integrative model that determines the average orientation of a sample of elements using a maximum-likelihood estimation, and (C) the predictions of a filter-based segregation model (see Appendix A for details regarding all models).
A purely integrative model that averages all the orientation information available in the stimulus will base coherence thresholds entirely on the mean orientation and will not be sensitive to the variance of orientations in either the signal or noise distributions. Coherence thresholds, in this case, are expected to be just above 50% coherence because it is only possible to identify which direction represents signal once the signal information exceeds 50%. Increasing the bandwidth of either the signal or the noise will not affect coherence thresholds because changing the bandwidth only increases the variance of orientations represented while leaving the mean signal direction unchanged. Figure 6 plots the model results of both versions of the integration model: complete integration, whereby global orientation is estimated via vector averaging across all elements (Figure 6A), and integration, where the global orientation is estimated via maximum-likelihood across a random subset of elements (Figure 6B). 
By contrast, a model involving both segregation and integration (lines in Figure 6C) uses only a subset of the orientation information in the stimulus (i.e., that most relevant to the signal). As a result, increasing the variance of the noise (at a fixed signal variance) improves performance because, at broad bandwidths, the noise signal-strength is more broadly distributed (i.e., the noise itself is less oriented), lessening the extent to which it can act as a competing signal. 
The performance of human observers (data symbols in Figure 6) is seen to improve as the bandwidth of the noise broadens; this is true for all signal bandwidths. In Figure 6A and 6B, the predictions for the purely integrative models (dashed lines) do not conform to the data (symbols) at any of the three different signal orientation bandwidths. However, the model that involves filter-based segregation (dashed lines in Figure 6C) does capture the main features of the data (symbols), namely the dependence on signal and noise orientation bandwidth, suggesting that the predictions of the segregation model are consistent with human performance. 
The better fit of the segregation model, compared with the integration models, implies that orientation coherence sensitivity involves more than simply averaging across the entire array; it also involves segregation of signal from noise. This segregation could be implemented by the visual system in a variety of ways; our modeling using a simple two-filter approach is but one possible way. 
Having shown the utility of the segregation approach, we revisit the data displayed in Figure 5 to further test this model's suitability. The segregation model captures the main features of the results displayed in Figure 5, where the disruptive effects of broadening the global orientation bandwidth are greater than those of broadening the local orientation bandwidth. 
Discussion
The first question addressed in this study is the extent to which the key dependencies of global orientation judgments are similar to those for global motion direction judgments. Answering this question will tell us the extent to which processing within the dorsal and ventral pathways proceeds along common lines. A number of important similarities were observed in terms of the parameters of contrast, spatial frequency, and aperture size. For both dorsal and ventral global tasks, performance depended on the “visibility” of the stimulus; in other words, stimulus suprathreshold contrast (Dakin & Bex, 2001a; Hess & Aaen-Stockdale, 2008; Hess & Zaharia, 2010). Both tasks display scale invariance (Dakin & Bex, 2001a; Hess & Aaen-Stockdale, 2008) and a lack of dependence on aperture size (Dakin, 2001; Dakin et al., 2005; Downing & Movshon, 1989; Watamaniuk & Sekuler, 1992; although see Jones et al., 2003). This suggests that information processing in dorsal and ventral pathways is either collapsed across spatial frequency or processed similarly across scale and that it is only contrast relative to threshold that is important. Also, the underlying sampling occurs along very flexible lines for both pathways with no obvious areal dependence for central vision, consistent with an informational limit (Dakin, 2001). 
One important difference between the global processing of orientation and motion involves eccentricity. Once the visibility effects have been factored out, global motion processing does not exhibit an eccentricity dependency (Hess & Aaen-Stockdale, 2008) whereas global orientation processing does (Figure 4). Admittedly, this dependency is not strong and is only significant at eccentricities of 30° and greater. Although this represents an important difference between the processing of global orientation and motion, the notion that dorsal and ventral processing are similar, at least at the level where motion direction and orientation are concerned, is attractive. 
Unsurprisingly, orientation bandwidth is an important factor for orientation coherence judgments just as directional bandwidth is for motion coherence judgments (Dakin et al., 2005; Watamaniuk & Sekuler, 1992). The finer the judgments (i.e., ±10° relative to horizontal/vertical), the greater the dependence on the global as compared with the local orientational bandwidth. The coarse versus fine task dependence would be expected on the basis of a signal-to-noise consideration in that, for comparable signal detection, filters of a constant bandwidth will be more overlapping for the ±10° task. The local versus global bandwidth dependence can also be understood in terms of signal-to-noise considerations. When the stimulus bandwidth is broadened locally, all the signal elements contribute signal. This signal is derived both from their peak orientation and from the extra orientation components. Unlike the local manipulation, for the global manipulation, all individual signal elements remain narrow in bandwidth. Narrowband signal elements with orientations distant from the filter's peak sensitivity will constitute noise (rather than acting as additional signal), hence more dramatically reducing the signal/noise ratio. These manipulations, in themselves, tell us nothing about the bandwidth of the underlying detectors. For motion coherence, a comparable global bandwidth manipulation was done by Dakin et al. (2005), and a comparable local directional bandwidth manipulation was done by Watamaniuk and Sekuler (1992). The effects of these two manipulations for global motion detection are similar to those reported here for global orientation detection; a stronger dependence is seen for the global compared with the local directional bandwidth manipulation. 
The second question involves the extent to which form coherence thresholds can be explained simply in terms of integrative and segregative processes. A purely integrative model is really a straw man in the context of a neural system characterized by band-pass orientational filters. To institute a purely integrative model, one would need to entertain the possibility that the task was subserved by filters with very broad orientational bandwidths. The filter-based segregative model proposed by Jones et al. (2003) involving the comparison of the outputs of two optimally overlapping orientational band-pass filters centered on the signal orientations is a more realistic proposal. This model (see Appendix A) was found to provide a good description of the main trends of the data when the local and global orientational bandwidths were varied for both the coarse and fine coherence tasks (Figure 5). With the two parameters of the model (orientation bandwidth and internal noise) fixed, the same model provided a good description of the main trends in the data when noise orientational bandwidth was varied for various signal orientational bandwidths for the fine coherence task (Figure 6). 
Comparison with equivalent noise paradigms
Global form sensitivity has also been measured with the equivalent noise approach, where subjects are asked to compute the average orientation of an array of Gabors, the orientation of each being sampled from a Gaussian distribution of variable bandwidth (Dakin, 2001). The "noise" in equivalent noise paradigms, such as Dakin (2001), is fundamentally different from that in a coherent form or motion task. In coherent form and motion tasks, the noise consists of random orientations (or directions), while in the equivalent noise paradigm, noise is added by increasing the orientation or direction variance around the mean. Critically, then, in the equivalent noise paradigm, all the elements contain information relevant to solving the task and would be used by an ideal observer to do so. However, in the coherent form or motion task, only the signal elements contain information relevant to solving the task; the noise elements do not. This distinction is highlighted in our previous studies of the clinical condition, amblyopia. Amblyopes can perform normally in both form (Mansouri et al., 2004; Mansouri et al., 2005) and motion (Hess et al., 2006) equivalent noise tasks, but exhibit anomalies in both form (Simmers et al., 2005) and motion (Aaen-Stockdale & Hess, 2008; Aaen-Stockdale et al., 2007; Simmers et al., 2003; Simmers et al., 2006) coherence tasks. Thus, we believe that fundamentally different operations underlie equivalent noise and coherence tasks; the former involves purely integrative processes and the latter an additional segregative process. When additional noise irrelevant to solving the task is introduced into the equivalent noise paradigm, the visual system does not blindly integrate all the information provided by the stimulus (i.e., signal + noise) but maintains relatively high sensitivity by segregating signal from noise (Mansouri & Hess, 2006). 
Acknowledgments
This study was funded by NSERC grant 228103 to RFH. The experiments were generated and run using the software Psykinematix (Beaudot, 2009; Beaudot & Mullen, 2006). 
Commercial relationships: none. 
Corresponding author: Robert Hess. 
Address: McGill Vision Research Unit, Department of Ophthalmology, McGill University, Royal Victoria Hospital, Montreal, Quebec, Canada. 
References
Aaen-Stockdale, C., & Hess, R. F. (2008). The amblyopic deficit for global motion is spatial scale invariant. Vision Research, 48, 1965–1971.
Aaen-Stockdale, C., Ledgeway, T., & Hess, R. F. (2007). Second-order optic flow deficits in amblyopia. Investigative Ophthalmology & Visual Science, 48(12), 5532–5538, http://www.iovs.org/content/48/12/5532.
Achtman, R. L., Hess, R. F., & Wang, Y. Z. (2003). Sensitivity for global shape detection. Journal of Vision, 3(10):4, 616–624, http://www.journalofvision.org/content/3/10/4, doi:10.1167/3.10.4.
Beaudot, W. H. A. (2009). Psykinematix: A new psychophysical tool for investigating visual impairment due to neural dysfunction. Journal of the Vision Society of Japan, 21(1), 19–32.
Beaudot, W. H. A., & Mullen, K. T. (2006). Orientation discrimination in human vision: Psychophysics and modelling. Vision Research, 46(1), 26–46.
Braddick, O. J., O'Brien, J. M., Wattam-Bell, J., Atkinson, J., & Turner, R. (2000). Form and motion coherence activate independent, but not dorsal/ventral segregated, networks in human brain. Current Biology, 10, 731–734.
Dakin, S. C. (2001). An information limit on the spatial integration of local orientation signals. Journal of the Optical Society of America A, 18, 1016–1026.
Dakin, S. C., & Bex, P. J. (2001). Local and global visual grouping: Tuning for spatial frequency and contrast. Journal of Vision, 1(2):4, 99–111, http://www.journalofvision.org/content/1/2/4, doi:10.1167/1.2.4.
Dakin, S. C., & Bex, P. J. (2002). Summation of concentric orientation structure: Seeing the Glass or the window? Vision Research, 42(16), 2013–2020.
Dakin, S. C., Mareschal, I., & Bex, P. J. (2005). Local and global limitations on direction integration assessed using equivalent noise analysis. Vision Research, 45(24), 3027–3049.
Dakin, S. C., & Watt, R. J. (1997). The computation of orientation statistics from visual texture. Vision Research, 37, 3181–3192.
Downing, C. J., & Movshon, J. A. (1989). Spatial and temporal summation in the detection of motion in stochastic random dot displays. Investigative Ophthalmology & Visual Science (Suppl.), 30, 72.
Field, D. J., Hayes, A., & Hess, R. F. (1993). Contour integration by the human visual system: Evidence for a local "association field." Vision Research, 33(2), 173–193.
Garcia-Perez, M. A. (1998). Forced-choice staircases with fixed step sizes: Asymptotic and small-sample properties. Vision Research, 38, 1861–1881.
Hess, R. F., & Aaen-Stockdale, C. (2008). Global motion processing: The effect of spatial scale and eccentricity. Journal of Vision, 8(4):11, http://www.journalofvision.org/content/8/4/11, doi:10.1167/8.4.11.
Hess, R. F., Mansouri, B., Dakin, S. C., & Allen, H. (2006). Integration of local motion is normal in amblyopia. Journal of the Optical Society of America A, 23, 1–8.
Hess, R. F., & Zaharia, A. G. (2010). Global motion processing: Invariance with mean luminance. Journal of Vision, 10(13):22, 1–10, http://www.journalofvision.org/content/10/13/22, doi:10.1167/10.13.22.
Jones, D. G., Anderson, N. D., & Murphy, K. M. (2003). Orientation discrimination in visual noise using global and local stimuli. Vision Research, 43, 1223–1233.
Mansouri, B., Allen, H. A., Hess, R. F., Dakin, S. C., & Ehrt, O. (2004). Integration of orientation information in amblyopia. Vision Research, 44, 2955–2969.
Mansouri, B., Allen, H. A., & Hess, R. F. (2005). Detection, discrimination and integration of second-order orientation information in strabismic and anisometropic amblyopia. Vision Research, 45, 2449–2460.
Mansouri, B., & Hess, R. F. (2006). The global processing deficit in amblyopia involves noise segregation. Vision Research, 46, 4104–4117.
Morgan, M. J., & Ward, R. (1980). Conditions for motion flow in dynamic visual noise. Vision Research, 20(5), 431–435.
Morrone, M. C., Burr, D. C., & Vaina, L. M. (1995). Two stages of visual motion processing for radial and circular motion. Nature, 376, 507–509.
Newsome, W. T., & Pare, E. B. (1988). A selective impairment of motion perception following lesions of the middle temporal visual area (MT). Journal of Neuroscience, 8, 2201–2211.
Prins, N., & Kingdom, F. A. A. (2009). Palamedes: Matlab routines for analyzing psychophysical data. Retrieved from http://www.palamedestoolbox.org.
Sankeralli, M. J., & Mullen, K. T. (1996). Estimation of the L-, M-, and S-cone weights of the postreceptoral detection mechanisms. Journal of the Optical Society of America A, 13(5), 906–915.
Simmers, A. J., Ledgeway, T., & Hess, R. F. (2005). The influences of visibility and anomalous integration processes on the perception of global spatial form versus motion in human amblyopia. Vision Research, 45, 449–460.
Simmers, A. J., Ledgeway, T., Hess, R. F., & McGraw, P. V. (2003). Deficits to global motion processing in human amblyopia. Vision Research, 43, 729–738.
Simmers, A. J., Ledgeway, T., Mansouri, B., Hutchinson, C. V., & Hess, R. F. (2006). The extent of the dorsal extra-striate deficit in amblyopia. Vision Research, 46, 2571–2580.
Ungerleider, L., & Mishkin, M. (1982). Two cortical visual systems. In D. J. Ingle, M. A. Goodale, & R. J. W. Mansfield (Eds.), Analysis of visual behaviour (pp. 549–586). Cambridge, MA: MIT Press.
Watamaniuk, S. N. J., & Sekuler, R. (1992). Temporal and spatial integration in dynamic random dot stimuli. Vision Research, 32(12), 2341–2347.
Williams, D. W., & Brannan, J. R. (1994). Spatial integration of local motion signals. In A. T. Smith & R. J. Snowden (Eds.), Visual detection of motion (pp. 291–303). London: Harcourt Brace & Company.
Williams, D. W., & Sekuler, R. (1984). Coherent global motion percepts from stochastic local motions. Vision Research, 24, 55–62.
Wilson, H. R., & Wilkinson, F. (1996). Non-Fourier mechanisms in human form vision: Psychophysical data and theory. Investigative Ophthalmology & Visual Science, 37(3), S955.
Wilson, H. R., Wilkinson, F., & Asaad, W. (1997). Concentric orientation summation in human form vision. Vision Research, 37(17), 2325–2330.
Appendix A
Model descriptions
Integrative model: unweighted vector averaging operator
This model uses all the information available in the element array (both signal and noise elements) to compute an average orientation. We averaged the orientation of all micro-patterns using the following vector-averaging equation:

θ̄ = ½ · atan2(S, C),  where  S = Σi sin(2θi)  and  C = Σi cos(2θi),  (A1)

and θi is the orientation of the ith micro-pattern. The angles are doubled before averaging (and the resulting angle halved) so that orientations 180° apart are treated as identical. 
The response (decision rule) is determined by the averaged orientation: the model responds with the clockwise alternative if the averaged orientation falls to the right of vertical (0°) and with the counterclockwise alternative otherwise. (A2) 
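The vector-averaging stage can be sketched as follows. Because orientation is periodic over 180°, the standard doubled-angle trick is assumed; this is a minimal illustration, not the authors' code.

```python
import math

def vector_average_orientation(thetas_deg):
    """Axial vector average of a list of orientations (degrees).

    Angles are doubled so that orientations 180 deg apart map to the
    same point on the circle, averaged as unit vectors, and the
    resultant angle is halved to return to orientation space.
    """
    s = sum(math.sin(math.radians(2.0 * t)) for t in thetas_deg)
    c = sum(math.cos(math.radians(2.0 * t)) for t in thetas_deg)
    return 0.5 * math.degrees(math.atan2(s, c))

def integrative_decision(thetas_deg):
    """Decision rule for the +/-10 deg task: left or right of vertical,
    with vertical implemented as 0 deg."""
    return "right" if vector_average_orientation(thetas_deg) > 0.0 else "left"
```

Note that `atan2` (rather than a plain arctangent) keeps the quadrant information of the summed vector.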
To simulate human performance, we assumed that visual performance is limited by internal noise. We applied late-stage, Gaussian-distributed noise with standard deviation σ (a free parameter) to account for all possible noise sources. 
We ran a series of trial-by-trial simulations to generate modeled thresholds with the goal of finding the best-fitting parameter (late-stage Gaussian noise) that would minimize the least-squared difference between the actual subject thresholds and the modeled thresholds. 
Generating these modeled thresholds required two steps. In Step 1, we modeled thresholds individually for each stimulus level (i.e., for each level of noise and signal jitter; see Figure 6). To do so, we ran trial-by-trial simulations. On each simulated trial, the micropatterns were randomly generated and the mean orientation was calculated (Equation A1). The late-stage, Gaussian-distributed noise (with standard deviation σ) was added to this mean orientation. The resulting mean orientations were used to generate a binary orientation decision. Because the signal was always ±10°, we used a simplified decision rule that determined whether the average orientation was to the left or right of vertical (implemented as ±0°: Equation A2). This process was repeated for a range of coherence levels (0% to 100% coherence in 10% steps), and the decisions across 100 trials at each coherence level were used to generate a psychometric function (% decision vs. coherence). We used the Palamedes toolbox (Prins & Kingdom, 2009) to fit the psychometric function with a cumulative Gaussian function to estimate the coherence threshold. 
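Step 1 can be sketched as below. All parameter values are placeholders, and a simple linear interpolation of the percent-correct function stands in for the Palamedes cumulative-Gaussian fit used in the paper.

```python
import math
import random

def simulate_vector_average_threshold(n_elements=100, signal_mean=10.0,
                                      sigma_internal=3.0, n_trials=100,
                                      criterion=0.75, seed=1):
    """Trial-by-trial simulation of the vector-averaging observer,
    returning the coherence (in %) at which it reaches `criterion`
    proportion correct, or None if that level is never crossed."""
    rng = random.Random(seed)

    def mean_orientation(thetas):
        # Axial vector average (doubled angles, 180-deg period).
        s = sum(math.sin(math.radians(2.0 * t)) for t in thetas)
        c = sum(math.cos(math.radians(2.0 * t)) for t in thetas)
        return 0.5 * math.degrees(math.atan2(s, c))

    coherences = [i / 10.0 for i in range(11)]   # 0% to 100% in 10% steps
    pc = []
    for coh in coherences:
        n_signal = round(coh * n_elements)
        n_correct = 0
        for _ in range(n_trials):
            elements = ([signal_mean] * n_signal +
                        [rng.uniform(0.0, 180.0)
                         for _ in range(n_elements - n_signal)])
            # Late Gaussian noise is added to the averaged orientation.
            estimate = mean_orientation(elements) + rng.gauss(0.0, sigma_internal)
            n_correct += estimate > 0.0          # "right of vertical" is correct
        pc.append(n_correct / n_trials)

    # Interpolate the percent-correct function at the criterion level.
    for lo in range(10):
        if pc[lo] < criterion <= pc[lo + 1]:
            frac = (criterion - pc[lo]) / (pc[lo + 1] - pc[lo])
            return 100.0 * (coherences[lo] + 0.1 * frac)
    return None
```

Repeating this for each noise-jitter level (Step 2) yields a modeled threshold-versus-jitter function that can be compared with the behavioral data.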
In Step 2, we repeated Step 1 for each level of orientation noise jitter and combined the obtained thresholds into a new function: percent coherence versus noise jitter (see model lines in Figure 6). These modeled functions were compared with the actual data functions, and the least-squared difference was calculated. By minimizing the least-squared distance (using the fminsearch function in Matlab) between the log-transformed behavioral thresholds and the model thresholds, we determined the best-fitting parameter (σ, the late Gaussian noise). 
Finally, we iterated this entire process 100 times, taking the solution with the minimum least-squared distance as our best fit; the corresponding value of σ was used as our internal noise level. This process was completed separately for each modeled line in Figure 6. 
Figure A1
 
Schematic diagram of orientation summation model (along the lines suggested by Jones et al., 2003). The partial stimulus pattern is depicted at the top. Each element of the oriented Gabor was convolved with vertical and horizontally oriented filters and the filter response for the Gabors with different orientations is plotted in the second row. Within each orientation channel, the filter responses are summed together after which Gaussian distributed noise is added (late noise). The model response is determined by the orientation of the filter that produces the larger response.
Integrative model: maximum likelihood operator
This model adopts the maximum likelihood operator to estimate the mean global orientation based on a random subsample of elements (arbitrarily fixed at 70% of the elements). 
In this case, the estimate of the global (or mean) orientation was determined by maximizing the likelihood of the element orientations under a template whose shape matched a normal distribution (selected because our signal followed a normal distribution). The likelihood function is given as follows:

L(θ) = Πi (1 / (σ√(2π))) · exp( −(mi − θ)² / (2σ²) ),  (A3)

where mi is the orientation of the ith micropattern. The log-likelihood function is given by

ln L(θ) = −n · ln(σ√(2π)) − Σi (mi − θ)² / (2σ²),  (A4)

where ln L is maximized to find θ. 
As before, we ran trial-by-trial simulations (N = 100) to model coherence thresholds across noise jitter levels and then repeated the process separately for each signal jitter level. The procedure for these trial-by-trial simulations was exactly the same as described previously, with the exception of using Equations A3 and A4 to find the global orientation, rather than Equation A1. Using the same approach as before, we minimized the least-squared distance between the log-transformed behavioral thresholds and the model thresholds by finding the best-fitting parameter (σ, the internal noise). 
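Because, for a fixed-width Gaussian template, maximizing ln L over θ reduces to taking the mean of the subsampled orientations, the subsampling estimator can be sketched as below (the function name and seeding scheme are assumptions):

```python
import random
from statistics import fmean

def ml_global_orientation(orientations, sample_frac=0.7, seed=None):
    """Maximum-likelihood estimate of the global orientation from a
    random subsample of elements (70% of elements by default, matching
    the arbitrary fraction described in the text).

    For a Gaussian template with fixed sigma, the theta that maximizes
    ln L = -sum_i (m_i - theta)^2 / (2 sigma^2) + const
    is simply the mean of the subsampled orientations.
    """
    rng = random.Random(seed)
    k = round(sample_frac * len(orientations))
    subsample = rng.sample(orientations, k)
    return fmean(subsample)
```

The random subsampling is what distinguishes this operator from the complete vector-averaging model: trial-to-trial variability arises from which elements happen to be sampled, in addition to the late internal noise.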
Segregation model
The segregation model was a two-stage filtering model. The main architecture of this model was a band of linear filters that operated on the input images. The sensitivity profile of these linear filters was a wavelet defined by a Gabor function (to mimic the receptive field properties of neurons in primary visual cortex). We also assumed the sensitivity functions are identical at all positions in the visual field and that the bandwidth and spatial frequency of the filters is matched to the signal micropatterns. As a result, when modeling the data in Figure 6, the two filters were set to ±10°. When modeling the data in Figure 5, we separately modeled the coarse (horizontal/vertical) and fine (±10°) tasks, and assumed two oriented filters: tuned to 0° and 90° for the coarse task and to ±10° for the fine task. 
The output of the linear filter with sensitivity/modulation profile Fi to the micropattern Mj was calculated in the frequency domain as the summed product of the two amplitude spectra,

Ei,j = Σf | F̂i(f) · M̂j(f) |,

where F̂i and M̂j denote the amplitude spectra of the filter and the micropattern, and the filter's center frequency f0 and frequency bandwidth σf match our micropattern. That is, the responses for each input element, taken from its amplitude spectrum, were summed together to determine its output, Ei,j. At the second stage, the outputs of each linear filter were summed across elements, followed by late-stage, Gaussian-distributed noise that corresponds to the noise or uncertainty at the psychophysical decision stage. This model included two free parameters: the orientation bandwidth of the filter and the standard deviation of the late-stage (Gaussian-distributed) noise. 
The response was determined by the orientation of the filter that produced the larger response; that is, the model responded with the preferred orientation of the filter whose noise-perturbed summed output, Σj Ei,j, was largest. 
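A minimal sketch of this comparison stage is given below, with the frequency-domain Gabor filtering simplified to Gaussian tuning in the orientation domain; the 15° tuning width, the function names, and this orientation-domain shortcut are assumptions, not the paper's implementation.

```python
import math
import random

def filter_response(thetas_deg, pref_deg, bandwidth_deg):
    """Summed response of one orientation-tuned filter to all elements,
    using Gaussian tuning over the (180-deg periodic) orientation
    difference as a stand-in for frequency-domain Gabor filtering."""
    def dori(a, b):
        # Smallest angular difference between two orientations.
        d = (a - b) % 180.0
        return min(d, 180.0 - d)
    return sum(math.exp(-dori(t, pref_deg) ** 2 / (2.0 * bandwidth_deg ** 2))
               for t in thetas_deg)

def segregation_decision(thetas_deg, prefs=(-10.0, 10.0),
                         bandwidth_deg=15.0, sigma_internal=0.5,
                         rng=random):
    """Compare the noisy outputs of two filters centred on the two
    possible signal orientations; respond with the preferred
    orientation of whichever filter gives the larger response."""
    responses = [filter_response(thetas_deg, p, bandwidth_deg)
                 + rng.gauss(0.0, sigma_internal) for p in prefs]
    return prefs[responses.index(max(responses))]
```

Because each filter responds only weakly to orientations far from its preference, narrowband noise centred between the two preferences competes with the signal, whereas broadband noise spreads its energy across orientations, which is the behavior the segregation model predicts.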
We ran trial-by-trial simulations (N = 100) following the same procedure as previously described to estimate the model thresholds. We minimized the least-squared distance (using the fminsearch function in Matlab) between the log-transformed behavioral thresholds and the model thresholds by finding the best-fitting parameters (σ, the internal noise, and β, the filter orientation bandwidth). 
In the case of the orientation bandwidth experiment (Figure 5), these parameter fits were determined separately for the fine and coarse task, and for the global and local manipulations (i.e., simulations were run separately for each model line in Figure 5). 
Figure 1
 
Stimulus examples. (A) Default global orientation task. This example depicts the situation where the signal is made up of vertically oriented elements while the noise elements are randomly oriented. (B) Row 1: Spatial frequency increasing from left to right; Row 2: Global orientation bandwidth increasing from left to right; Row 3: Local orientation bandwidth increasing from left to right. All examples depict vertical signal trials. Note: For figure clarity, these stimulus examples depict 5 × 5 stationary grids. The actual stimuli were 10 × 10 grids with limited lifetime elements.
Figure 2
 
Aperture size manipulation. Depicted from left to right are examples of decreasing aperture sizes. The total number of elements was varied to hold density constant.
Figure 3
 
Coherence thresholds for individual subjects (grey lines), and across-subject means (thick black lines). Error bars represent ±1 standard error of the mean. (A) Coherence threshold as a function of contrast. Contrast is presented in terms of fixed multiples above each subject's contrast threshold. (B) Coherence threshold as a function of aperture size. Element size and density were held constant while varying the number of presented elements to vary the overall stimulus aperture (grid diameter). (C) Coherence threshold as a function of spatial frequency. The spatial frequency of the carrier was varied within a fixed envelope, such that cycles per object varied with spatial frequency. (D) Coherence threshold as a function of spatial frequency with a fixed number of cycles per object. This was achieved by varying viewing distance to control the retinal spatial frequency of the elements.
Figure 4
 
Mean coherence thresholds as a function of (A) contrast at multiple eccentricities and (B) of eccentricity at multiple contrasts. Data represent subject means, with error bars depicting ±1 standard error of the mean. The heavy solid line in (B) represents the mean, collapsed across contrasts.
Figure 5
 
Mean coherence thresholds for a coarse (H/V) (A) versus fine (±10°) (B) global orientation task as a function of either broadening the bandwidth of individual array elements (local orientation bandwidth manipulation: filled squares) or jittering the peak orientation of narrowband array elements (the global orientation bandwidth manipulation: grey circles). The global orientation manipulation is more effective for disrupting performance especially for the fine orientation task (±10°). The dashed lines are segregation model predictions (see Appendix A and Figure A1). Error bars represent ±1 standard error of the mean.
Figure 6
 
The effect of varying the noise global orientation bandwidth on a ±10° coherence detection task for two subjects. Results (symbols) were obtained at three signal orientation bandwidths (SDs of 0°, 10°, and 20°). The dashed lines (different shades representing different signal orientation bandwidths) show the predictions of (A) a purely integrative model that computes the orientation vector average of all elements, (B) a purely integrative model that estimates the average orientation of a sample of elements by maximum-likelihood estimation, and (C) a filter-based segregation model (see Appendix A for details of all models).
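As a concrete illustration of the vector-average integrator (model A), the sketch below computes a circular mean over element orientations. Because orientation is axial (period 180°, so 0° and 180° are the same orientation), angles are doubled before averaging and the resulting mean angle is halved. The function names and the sign-based decision rule are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def orientation_vector_average(thetas_deg):
    """Vector-average a set of orientations (axial data, period 180 deg).

    Angles are doubled before averaging so that 0 and 180 deg are treated
    as the same orientation; the resulting mean angle is halved.
    """
    doubled = np.deg2rad(2.0 * np.asarray(thetas_deg, dtype=float))
    mean_angle = np.arctan2(np.sin(doubled).mean(), np.cos(doubled).mean())
    return np.rad2deg(mean_angle) / 2.0

def decide_tilt(thetas_deg):
    """Report +1 (clockwise, e.g. +10 deg) or -1 (counterclockwise)
    from the sign of the vector-averaged orientation relative to vertical."""
    return 1 if orientation_vector_average(thetas_deg) > 0 else -1
```

A model of this kind averages over every element, signal and noise alike, which is why its predictions degrade smoothly as the noise orientation bandwidth broadens.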
Figure A1
 
Schematic diagram of the orientation summation model (along the lines suggested by Jones et al., 2003). Part of the stimulus pattern is depicted at the top. Each oriented Gabor element was convolved with vertically and horizontally oriented filters; the filter responses for Gabors of different orientations are plotted in the second row. Within each orientation channel, the filter responses are summed, after which Gaussian-distributed noise is added (late noise). The model's response is determined by the orientation of the filter that produces the larger summed response.
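The summation model in Figure A1 can be sketched in a few lines. Here each element's filter response is approximated by a Gaussian orientation tuning curve; the tuning bandwidth (`tuning_sd`) and late-noise standard deviation (`noise_sd`) are placeholder assumptions, not values from the paper, and in the full model the responses would come from convolving each Gabor with the actual filters:

```python
import numpy as np

def summation_model_response(thetas_deg, tuning_sd=20.0, noise_sd=0.1, rng=None):
    """Two-channel (horizontal/vertical) orientation summation model sketch.

    Per-element filter responses are approximated by Gaussian orientation
    tuning (an assumed stand-in for filter convolution). Responses are
    summed within each channel, late Gaussian noise is added, and the
    channel with the larger summed response gives the decision.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(thetas_deg, dtype=float)

    def tuning(delta_deg):
        # Smallest angular difference for axial data (period 180 deg).
        d = np.abs((delta_deg + 90.0) % 180.0 - 90.0)
        return np.exp(-0.5 * (d / tuning_sd) ** 2)

    resp_v = tuning(theta - 90.0).sum() + rng.normal(0.0, noise_sd)  # vertical channel
    resp_h = tuning(theta - 0.0).sum() + rng.normal(0.0, noise_sd)   # horizontal channel
    return "vertical" if resp_v > resp_h else "horizontal"
```

Running this over many noise draws at a given signal coherence would yield a psychometric function from which a model threshold can be read off, analogous to the dashed predictions in Figure 5.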