Research Article  |   December 2007
Feature-based attention modulates the perception of object contours
Boge Stojanoski, Matthias Niemeier
Journal of Vision December 2007, Vol. 7(14):18. https://doi.org/10.1167/7.14.18
Abstract

Feature-based attention is known to support perception of visual features associated with early and intermediate visual areas. Here we examined the role of feature-based attention at higher levels of object processing. We used a dual-task design to probe perception of poorly attended contour-defined or motion-defined loops while attention was occupied with congruent or incongruent feature detection tasks. Perception of the unattended stimulus was better when it was presented concurrently with a congruent attended stimulus. However, this effect was eliminated when detection in the primary task was made easy, suggesting that the influence of task demand on object perception is feature specific. Our results provide evidence for a contribution of feature-based attention to object perception.

Introduction
The visual world is complex. If it were random, the brain could not analyze it despite the massively parallel computational power of some thirty cortical areas (Felleman & Van Essen, 1991). Fortunately, natural scenes show patterns in space and time (e.g., Geisler, Perry, Super, & Gallogly, 2001; Simoncelli & Olshausen, 2001), providing cues to predict perceptual events and to guide attention to optimize vision (Navalpakkam & Itti, 2007). 
Best known are cues that carry spatial information and guide attention to the location at which a stimulus is expected, improving perception within a confined focus of attention (e.g., Brefczynski & DeYoe, 1999; Bundesen, 1990; Posner, Snyder, & Davidson, 1980; Treisman & Gelade, 1980). Other cues convey information about non-spatial feature dimensions (e.g., color, orientation, or motion) or feature values (e.g., red or blue). Critically, attention based on feature cues influences perception throughout the visual field, independent of and even outside the location of the focus of attention. For example, when searching for a friend in a crowd, we may imagine the color of his or her coat to facilitate our search; or, when we center our attention on a moving object, this may change motion perception of unattended objects (e.g., Bichot, Rossi, & Desimone, 2005; Lu & Itti, 2005; McAdams & Maunsell, 2000; Melcher, Papathomas, & Vidnyanszky, 2005; Morrone, Denti, & Spinelli, 2002; Navalpakkam & Itti, 2007; Saenz, Buracas, & Boynton, 2002, 2003; Treue & Martinez Trujillo, 1999). 
This “feature-based attention” has been studied for early and intermediate visual mechanisms. However, does feature-based attention play a role in processes involved in higher level object perception? It has been demonstrated that these processes are modulated by attention during visual search (e.g., Ben-Av, Sagi, & Braun, 1992; Nothdurft, 1993), dual tasks (Pritchard & Warm, 1983, but see Davis & Driver, 1994; Gurnsey, Humphrey, & Kapitan 1992), and tests of figural aftereffects (e.g., Shulman, 1992; Suzuki, 2001; Yeh, Chen, De Valois, & De Valois, 1996). However, those studies manipulated spatial attention or combinations of spatial and feature-based attention; and likewise, models integrating attentional and recognition processes implement spatial forms of attention (for a review, see Itti & Koch, 2001). 
Support for feature-based effects of attention on high-level processes comes from neurophysiological data suggesting that attentional modulation is stronger for later cortical areas within the visual hierarchy (Maunsell & Cook, 2002). Arguably, the strongest modulation occurs at its highest level, in the inferior temporal (IT) cortex. Chelazzi, Miller, Duncan, and Desimone (1993) presented one preferred and one non-preferred stimulus inside the receptive field of IT neurons. The neurons responded to the preferred stimulus depending on whether the monkey was cued to search for it or not. However, these responses could reflect either feature-based or spatial mechanisms. Similarly, neuroimaging studies in humans have demonstrated effects in higher tier areas consistent with either feature-specific and/or spatial attention (Cant & Goodale, 2007; Corbetta, Miezin, Dobmeyer, Shulman, & Petersen, 1990; Corbetta, Miezin, Dobmeyer, Shulman, & Petersen, 1991; Murray & Wojciulik, 2003; Niemeier, Goltz, Kuchinad, Tweed, & Vilis, 2005). What is more, functional data do not necessarily map directly onto behavior. For example, activity in the fusiform face area varies with attention (O'Craven, Downing, & Kanwisher, 1999; Wojciulik, Kanwisher, & Driver, 1998). But at the same time people need little attention to identify the gender of faces (Reddy, Wilken, & Koch, 2004), and faces are difficult to ignore when presented as distractors (Lavie, Ro, & Russell, 2003). This might indicate that face recognition involves specialized, automatic mechanisms. Indeed, computational models have demonstrated that under certain conditions object perception could be achieved through fast feed-forward processes that do not rely on attention (Riesenhuber & Poggio, 1999). In sum, remaining gaps in the understanding of feature-based attention call for an investigation of the contributions of feature-based attention to object perception. 
As a critical step of object perception, contour integration segments the visual scene into the surfaces of different objects involving several low- and high-level areas (Kourtzi, Tolias, Altmann, Augath, & Logothetis, 2003; Mendola, Dale, Fischl, Liu, & Tootell, 1999). Area V1 may extract contours through excitatory connections between neurons sensitive to the orientation of neighboring segments of smooth contours (Field, Hayes, & Hess, 1993; Zhou, Friedman, & von der Heydt, 2000); and it seems to interact with higher areas via recurrent connections (Super, Spekreijse, & Lamme, 2001) through which acquired object knowledge might be integrated (Peterson & Gibson, 1993; Zemel, Behrmann, Mozer, & Bavelier, 2002). Such cognitive factors are known to be associated with object areas. For instance, the lateral occipital complex (LOC) responds to the images of objects (Malach et al., 1995); and bilateral lesions in the LOC lead to visual object agnosia (James, Culham, Humphrey, Milner, & Goodale, 2003). 
The significance of the LOC in contour integration has been demonstrated in two recent studies. Murray et al. (2002) showed that contour-related activity in the LOC precedes that of earlier areas. Stanley and Rubin (2003) found that the LOC responds to shapes with jagged, incomplete contours. Together these data suggest that recognition of coherent objects and surfaces achieved in the LOC (and perhaps in other high-level object areas) controls contour integration in a top-down manner, gaining maximum weight when vision is sub-optimal. 
Here we tested influences of feature-based attention on these mechanisms of contour integration. We used a dual-task paradigm to probe our participants' perception of loops with jagged contours while their focus of attention was occupied with congruent or incongruent feature detection tasks. To control for the amount of attention allocated to the dual tasks, we embedded the paradigm in a rapid serial visual presentation and adjusted the perceptual load of the primary task. Our results suggest that contour integration is governed by feature-based attention that works in an inhibitory manner across feature dimensions. 
Materials and methods
Participants
The two authors and two paid volunteers participated in three experiments. Three participated in Experiment 1 (AG, BS, and SG), two in Experiment 2 (BS and MN), and two in Experiment 3 (BS and MN). All observers had normal or corrected-to-normal vision and gave informed written consent before participating. Procedures were approved by the Human Participants Review Sub-Committee of the University of Toronto. 
Stimuli and procedures
Stimuli were presented on a 19-in. monitor (Viewsonic) at a refresh rate of 85 Hz (Experiments 1 and 2) or 100 Hz (Experiment 3) and an average luminance of 26.8 cd/m². The participant's head was stabilized at a distance of 60 cm from the screen using a chin rest, and an EyeLink II tracker (SR Research, Mississauga, ON) controlled for accurate eye fixation. Trials containing eye movements were discarded and recycled. We generated our experiments within Matlab (MathWorks) using the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997) and the EyeLink Toolbox extensions (Cornelissen, Peters, & Palmer, 2002). 
Experimental trials presented a central fixation arrow (0.7° by 0.7°) that pointed to the side to which participants should direct their attention. About 2700 ms later, two concurrent rapid serial visual presentation windows (6.8° by 6.8°) appeared 5.1° to the left and right of the arrow. 
Each of the two streams contained 35 frames. Each frame appeared for 154 ms (90 ms in Experiment 3) and presented 13 luminance-defined gabors (spatial frequency: 2.1 cycles per degree, standard deviation: 0.2°). In most frames, the gabors were randomly scattered, but sometimes 10 of them formed one of two possible target stimuli. Contour-defined loops were formed by a closed chain of gabors that were roughly collinear with the (invisible) outline of the loop (Figure 1a). Motion-defined loops were formed by gabors at similar locations but with random orientations; instead, they rotated for the brief period during which they were visible (Figure 1d). To generate the rotations, we displayed every stimulus as a short movie of four video frames, and we did this for all 35 stimuli to keep display times the same. 
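In geometric terms, each target consists of 10 Gabor elements lying on an invisible, slightly jagged loop, with the proportion of elements oriented along the local outline tangent setting the collinearity of a contour-defined loop. The sketch below illustrates this construction. The experiments themselves were programmed in Matlab with the Psychophysics Toolbox, so this NumPy version, including the loop shape (a radially jittered circle) and names such as `make_loop_gabors` and `jitter`, is an illustrative assumption rather than the authors' stimulus code.

```python
import numpy as np

def make_loop_gabors(n_loop=10, n_distractors=3, collinearity=1.0,
                     radius=2.0, jitter=0.3, rng=None):
    """Place Gabor elements on an invisible jagged loop (illustrative sketch).

    collinearity: fraction of loop Gabors oriented along the local tangent of
    the loop outline; the rest get random orientations (cf. Figure 1a-c).
    Distractor Gabors are scattered at random. Returns (x, y, orientation_deg).
    """
    rng = np.random.default_rng(rng)

    # Jagged closed outline: a circle with random radial perturbations (assumed shape).
    theta = np.linspace(0.0, 2.0 * np.pi, n_loop, endpoint=False)
    r = radius + rng.uniform(-jitter, jitter, n_loop)
    x, y = r * np.cos(theta), r * np.sin(theta)

    # Local tangent orientation estimated from neighbouring loop points.
    dx = np.roll(x, -1) - np.roll(x, 1)
    dy = np.roll(y, -1) - np.roll(y, 1)
    tangent = np.degrees(np.arctan2(dy, dx)) % 180.0

    # Only the requested fraction of loop elements is collinear with the outline.
    ori = rng.uniform(0.0, 180.0, n_loop)
    n_coll = int(round(collinearity * n_loop))
    coll_idx = rng.choice(n_loop, n_coll, replace=False)
    ori[coll_idx] = tangent[coll_idx]

    # Randomly positioned, randomly oriented distractor Gabors.
    xd = rng.uniform(-radius - jitter, radius + jitter, n_distractors)
    yd = rng.uniform(-radius - jitter, radius + jitter, n_distractors)
    od = rng.uniform(0.0, 180.0, n_distractors)

    return (np.concatenate([x, xd]),
            np.concatenate([y, yd]),
            np.concatenate([ori, od]))

# Example: an 80%-collinearity loop, roughly as in Figure 1b.
x, y, ori = make_loop_gabors(collinearity=0.8, rng=1)
```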
Figure 1
 
Target stimuli. Loops consisted of 10 gabors. Three additional gabors were randomly positioned. For contour-defined loops the gabors were roughly collinear. For motion-defined loops, they were randomly oriented and spun in unison around their axes. (a) Contour-defined loop with 100% collinear gabors. (b) Contour-defined loop with 80% collinearity. (c) Contour-defined loop with 60% collinearity. (d) Motion-defined loop.
Pre-test
Participants learned to detect the target stimuli in a pre-test phase during which they performed a feature-detection task: They were asked to fixate the central arrow indicating the direction in which to attend and to quickly press a mouse key (<600 ms in Experiments 1 and 2, <400 ms in Experiment 3) whenever a target stimulus appeared. Each trial showed two target stimuli on the attended side (and none on the other), the first at frame 13 (±3 frames) and the second 8 frames (±3 frames) later. A trial counted as successfully completed only when both targets were detected. We used the method of constant stimuli to measure the probability of detection as a function of visibility, that is, the degree of collinearity or the amount of rotation of the gabors in the contour-defined and motion-defined loops, respectively. Contours and motion appeared in separate blocks of trials, but their location on the left or right side (as predicted by the fixation arrow) was randomly interleaved. Training continued until detection thresholds reached a plateau, after 15 or more days of approximately 1-hour sessions. The data from the final three days (∼396 trials) were used to standardize stimulus visibility in the subsequent experiments. 
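Because detection probability is mapped as a function of visibility with the method of constant stimuli, the pre-test data lend themselves to a standard psychometric-function fit, from which the visibility levels yielding, say, 72% or 95% detection can be read off. The sketch below shows one way this could be done; the Weibull form, the lapse rate, the SciPy fit, the function names (`fit_psychometric`, `threshold_at`), and the example data are our own illustrative assumptions, not the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull(x, alpha, beta, guess=0.0, lapse=0.02):
    """Weibull psychometric function: detection probability vs. stimulus visibility."""
    return guess + (1.0 - guess - lapse) * (1.0 - np.exp(-(x / alpha) ** beta))

def fit_psychometric(visibility, n_correct, n_trials):
    """Fit the Weibull to detection counts from the method of constant stimuli."""
    p = n_correct / n_trials
    popt, _ = curve_fit(weibull, visibility, p,
                        p0=[np.median(visibility), 2.0],
                        bounds=([1e-6, 0.1], [np.inf, 20.0]))
    return popt  # (alpha, beta); guess and lapse kept at their defaults

def threshold_at(p_target, alpha, beta, guess=0.0, lapse=0.02):
    """Invert the fitted function, e.g. to find the visibility level giving
    72% ('difficult') or 95% ('easy') detection."""
    q = (p_target - guess) / (1.0 - guess - lapse)
    return alpha * (-np.log(1.0 - q)) ** (1.0 / beta)

# Hypothetical pre-test data: collinearity levels, hits, and trials per level.
vis = np.array([0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
hits = np.array([10, 18, 30, 44, 55, 62])
n = np.full(6, 66)
alpha, beta = fit_psychometric(vis, hits, n)
print("72% detection point:", threshold_at(0.72, alpha, beta))
```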
Experiment 1
To probe influences of feature-based attention on contour perception, we used a dual-task paradigm (Figure 2). The primary task was the feature-detection task from the pre-test phase, with stimulus visibility adjusted on the basis of the pre-test thresholds. This task served to keep the focus of attention on one side, while on the other side we added a secondary task to probe contour perception outside the attentional focus. This secondary task used a two-interval forced-choice paradigm: it presented a single contour-defined loop, the "secondary loop," together with either the first or the second target stimulus of the primary task. This way, solving the secondary task required attending to the primary task. To further ensure that attention remained on the primary side, participants were asked about the secondary loop only after successful completion of the primary task. Otherwise, the trial was discarded and recycled. 
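The trial logic of the dual task can be summarized in a few lines: two primary targets appear on the attended side, the secondary loop accompanies either the first or the second of them, and the secondary response is scored as a two-interval forced choice. The sketch below is a hypothetical reconstruction of that scheduling; the target frame numbers follow the pre-test description and are assumed to carry over, and the function names and return format are our own.

```python
import numpy as np

def schedule_dual_task_trial(n_frames=35, rng=None):
    """Sketch of one dual-task trial: two primary targets on the attended side,
    one secondary loop on the other side shown with either the first or the
    second primary target (two-interval forced choice)."""
    rng = np.random.default_rng(rng)
    first = 13 + rng.integers(-3, 4)           # frame of first primary target (13 +/- 3)
    second = first + 8 + rng.integers(-3, 4)   # second primary target, 8 +/- 3 frames later
    assert second < n_frames                   # both targets fit into the 35-frame stream
    interval_with_secondary = int(rng.integers(1, 3))   # 1 or 2
    secondary_frame = first if interval_with_secondary == 1 else second
    return {"primary_frames": (int(first), int(second)),
            "secondary_frame": int(secondary_frame),
            "correct_answer": interval_with_secondary}

def score_2ifc(responses, correct):
    """Proportion correct on the secondary two-interval forced-choice judgments."""
    responses, correct = np.asarray(responses), np.asarray(correct)
    return float(np.mean(responses == correct))
```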
Figure 2
 
Experimental paradigm. Two concurrent rapid serial visual presentation windows displayed the primary (attended) task and the secondary (unattended) task on either side of a central fixation arrow. The arrow pointed toward the primary task, to which attention was to be allocated. The primary task presented either contour- or motion-defined loops. The secondary task probed perception of an object that was either congruent or incongruent with the target objects presented in the primary task.
Note that the secondary task always presented contour-defined loops of varying collinearity to map psychometric functions. Experimental conditions differed in terms of the attended feature (contours versus motions) and perceptual load of the primary task (72% detection or “difficult task” versus 95% detection or “easy task”). The four conditions were tested in blocks of 66 trials, and each block randomly interleaved the sides on which the primary and secondary task appeared, respectively. On average 440 trials were collected for each condition and participant. 
Experiment 2
The purpose of Experiment 2 was to reject the possibility that the results from Experiment 1 could be accounted for by the size of the attentional spotlight. The procedure was similar to that of Experiment 1, except that the secondary stimulus was motion-defined. Furthermore, to demonstrate that the motion detection task required perception of motion rather than perception of contours defined by the motion, in half the trials the secondary motion stimulus consisted of 10 rotating gabors that did not form a loop. Only difficult contour- and motion-defined loops were used as primary task stimuli (72% detection rate). On average, there were 561 trials per condition, tested in separate blocks of 44 trials. 
Experiment 3
Experiment 3 assessed the possibility that the feature-based effects observed in Experiments 1 and 2 were due to the fact that contour- and motion-defined loops differed in temporal structure, potentially resulting in differences in perceptual grouping of primary and secondary tasks (Lee & Blake, 1999). To make processing times more similar, we sped up the presentation and presented contour-defined or motion-defined loops on the secondary side while, on the primary side, the probability of detecting motion or contours was set to 75%. Each condition was tested 437 times in blocks of 66 trials. 
Data analysis
To estimate the reliability of perceptual thresholds, we used the parametric bootstrap method (e.g., Efron & Tibshirani, 1993). Based on psychometric functions derived from the original data, we simulated 1000 repetitions of the experiments with the same numbers of trials at the same intensities. From the resulting psychometric functions, we then calculated median thresholds and, as confidence intervals, the 2.5th and 97.5th percentiles of the thresholds. 
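A parametric bootstrap of this kind can be sketched as follows: binomial responses are simulated from the fitted psychometric function at the original intensities and trial counts, each simulated experiment is refitted, and the 2.5th and 97.5th percentiles of the refitted thresholds serve as the confidence interval. The sketch reuses the hypothetical `weibull`, `fit_psychometric`, and `threshold_at` helpers from the earlier sketch and is an illustration of the procedure described above, not the authors' code; the 75% threshold criterion is an assumption.

```python
import numpy as np

def bootstrap_threshold(visibility, n_correct, n_trials, p_target=0.75,
                        n_boot=1000, rng=None):
    """Parametric bootstrap of a psychometric threshold (cf. Efron & Tibshirani, 1993):
    simulate repetitions from the fitted function with the same intensities and trial
    counts, refit each one, and summarize the refitted thresholds."""
    rng = np.random.default_rng(rng)
    alpha, beta = fit_psychometric(visibility, n_correct, n_trials)
    p_fit = weibull(visibility, alpha, beta)

    thresholds = []
    for _ in range(n_boot):
        sim_correct = rng.binomial(n_trials, p_fit)          # one simulated experiment
        a, b = fit_psychometric(visibility, sim_correct, n_trials)
        thresholds.append(threshold_at(p_target, a, b))

    thresholds = np.asarray(thresholds)
    median = np.median(thresholds)
    ci = np.percentile(thresholds, [2.5, 97.5])               # non-parametric CI
    return median, ci
```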
Results
The purpose of Experiment 1 was to examine the contribution of feature-based attention to contour integration. We found that all participants were significantly better at perceiving the secondary contour-defined loop when their attention was occupied with the difficult contour detection task (72% detection based on the pre-test), as opposed to the difficult motion detection task (Figure 3, left column). In contrast, with the two easy detection tasks (95% detection), perception of unattended contours was virtually identical (Figure 3, right column). Primary detection rates were close to those measured during the pre-test and did not differ significantly across feature conditions (detection during difficult tasks: AG: 78.8%/77.6%, BS: 71.9%/77.5%, SG: 73.0%/70.8%; detection during easy tasks: AG: 85.9%/97.4%, BS: 88.9%/97.2%, SG: 92.3%/99.2%). These results support the idea that feature-based attention plays a role in contour integration, depending on the perceptual load of the primary task. 
Figure 3
 
Experiment 1. The probability of perceiving the secondary (unattended) loop plotted as a function of collinearity. Psychometric functions are plotted for each observer in both the difficult and easy condition (left and right, respectively) while performing either the primary contour detection task (solid curves, black circles) or the motion detection task (dashed curves, open circles). Superimposed are bootstrap results; diamonds represent median thresholds, and error bars indicate non-parametric confidence intervals, i.e., the thresholds' 2.5th and 97.5th percentile, respectively.
Interestingly, thresholds in the easy condition were very similar to those of the difficult contour detection task. To better illustrate this, Figure 4 plots individual thresholds obtained when the primary detection task was easy as a function of thresholds during difficult detection. For contour detection, the data points lie close to the diagonal. For primary motion detection, however, the data points are horizontally displaced to the right. Thus, only in the incongruent feature condition did perceptual load affect "unattended" contour perception. The results suggest that perceptual load inhibits contour perception in a feature-specific manner. 
Figure 4
 
Experiment 1. Individual perceptual thresholds for the secondary task obtained when the primary detection task was easy as a function of thresholds during difficult detection. Black circles: primary contour detection; open circles: primary motion detection.
However, it is possible that this effect is due to feature-specific differences in the size and shape of the attentional spotlight. Perhaps contour detection was associated with a wider attentional spotlight. Alternatively, participants might have split their attentional spotlight into two (McMains & Somers, 2004), and given the proficiency of human observers in multiple object tracking (Sears & Pylyshyn, 2000), it is possible that this was more likely to happen during detection of object contours rather than motion. Either way, contour detection should yield better "unattended" perception regardless of the feature on that side. To test this possibility, in Experiment 2, we used motion stimuli instead of contours as the secondary task. In half the trials, we tested "unattended" perception of motion-defined loops. The other half used rotating gabors that did not form a loop to examine possible effects of the global shape of the motion-defined loops. 
Both participants were better at perceiving “unattended” motion (motion-defined loops and non-loop motion combined; wide black and white bars in Figure 5) when presented together with the motion detection task rather than the contour detection task. Thresholds were very similar regardless of whether the “unattended” rotating gabors did or did not form a loop (thin gray and white bars, Figure 5). At the same time, detection rates for the primary task were similar to those during pre-test and did not differ significantly across feature conditions (during secondary motion-defined loops task: BS: 77.9%/74.7%, MN: 73.3%/73.1%; during secondary motion task: BS: 77.3%/75.9%, MN: 73.8%/67.9%). The results argue against a general advantage of the contour detection task as the primary task. Instead they are consistent with an advantage of feature congruency. The fact that the spatial array of the rotating gabors of the secondary task did not matter suggests that feature congruency as observed here was independent of global shape perception, likely because the shape of motion-defined loops was not particularly salient. 
Figure 5
 
Experiment 2. Perceptual thresholds for motion-defined loops or motion on the secondary task side obtained either during primary contour-defined loop detection or primary motion-defined loop detection (wide black and white bars). Thin gray and white bars show the same data separately for secondary motion-defined loops and non-loop motion. Superimposed are bootstrap results; circles represent median thresholds, and error bars indicate the thresholds' 2.5th and 97.5th percentile, respectively. 1.cdl: primary contour-defined loops; 1.mdl: primary motion-defined loops.
Could this effect of feature congruency be due to feature-specific differences in temporal structure? That is, could congruent primary and secondary loops be better perceived because contours were temporally more similar to each other than to motion and vice versa? If so, feature-specific effects should disappear when stimuli were presented faster. However, this is not what we observed in Experiment 3. For secondary contour-defined loops, both participants showed an advantage while attending to contours rather than motion, similar to that observed in Experiment 1; and for secondary motion-defined loops, the difference was substantially larger than in Experiment 2 ( Figure 6). Participants showed no difference in detecting primary contour- or motion-defined loops (during secondary contour-defined loops task: BS: 75.9%/69.8%, MN: 78.5%/67.8%; during secondary motion-defined loops task: BS: 70.8%/76.7%, MN: 75.3%/81.4%). 
Figure 6
 
Experiment 3. Perceptual thresholds for contour- and motion-defined loops on the secondary task side obtained either during primary contour-defined loop detection (black bars) or primary motion-defined loop detection (white bars). Superimposed are bootstrapping results, circles represent median thresholds, and error bars indicate the thresholds' 2.5th and 97.5th percentile, respectively. Note that in panel b the upper percentile is cut off for graphical reasons.
Discussion
In the present study, we tested whether non-spatial, feature-based attention supports contour integration, an important component of object perception. We used a dual-task paradigm to keep the spatial focus of attention fixed on a primary task while a secondary task probed perception on the opposite side. Rapid serial visual presentation was employed to prevent attention from quickly switching between stimuli. We found that participants perceived secondary contour-defined loops better when the concurrent primary task required detection of similar contours as opposed to detection of motion (Experiment 1). The advantage reversed when the secondary loops were motion-defined; that is, participants' unattended motion perception was better when they performed the motion detection task rather than the contour detection task (Experiment 2). Furthermore, we found similar feature-congruent effects when we sped up the visual presentation (Experiment 3). These data argue for feature-based attention as an important factor in object perception. 
Our results are difficult to explain with differences in the focus of attention across feature conditions, even if we cannot rule out such differences. Despite our efforts to keep the focus the same by matching the visibility and size of primary task stimuli, attention might, for example, have formed a wider "spotlight" for primary contour-defined loops so as to detect the global form of the loops, while for primary motion-defined loops participants might have focused on a smaller area of the display to boost motion sensitivity within that area. Alternatively, attention on contours might split more easily into two spotlights (McMains & Somers, 2004), perhaps in line with the fact that human observers are very proficient at tracking multiple objects at the same time (Sears & Pylyshyn, 2000). Either way, such a feature-specific effect on the focus of attention should always favor conditions in which contours are attended. This is inconsistent with the results from Experiment 2. 
Could participants have shifted attention between primary and secondary tasks and was shifting perhaps easier when primary and secondary tasks were feature-congruent? There is evidence against such a possibility. Duncan, Ward, and Shapiro (1994) have demonstrated that shifting attention requires about 250 ms. That is much longer than the 154-ms presentation time in Experiments 1 and 2. In Experiment 3, we further reduced presentation time to 90 ms. If feature-congruent advantages on the secondary side were due to shifts of attention, the advantage should decline with presentation speed. But it did not. For secondary motion the difference increased, perhaps because the speeded-up display flickered more strongly. Flicker perception activates similar areas as motion perception (Tootell et al., 1995), which might indicate a functional overlap in processing. If so, in the present study motion could have been suppressed together with flicker when attention was on contour-defined loops while more flicker-specific filters would have been required when attention was on motion-defined loops. 
Is it possible that the present feature-congruency effects were due to pre-attentional grouping mechanisms of temporally similar structure (Lee & Blake, 1999)? That is, our feature-congruent stimuli were temporally more similar than feature-incongruent ones. Could this have resulted in better performance due to stronger grouping? We believe not. The feature-congruency effect in Experiment 1 was more prominent for perceptually difficult rather than easy stimuli despite easy motion (i.e., fast rotation) being temporally more different from contours (no rotation) than difficult motion (slow rotation). What is more, speeding up display in Experiment 3 made temporal structure of stimuli more similar but did not reduce the feature-congruency effects. 
Our findings are consistent with previous studies on feature-based attention. For example, Treue and Martinez Trujillo (1999) have shown that attending to motion modulates neural activity in MT and MST in a direction-specific manner, even when the attended stimulus appeared outside the receptive field of the neuron. Saenz and colleagues (2002) confirmed these results using functional imaging. They found that in several early and intermediate visual areas responses to motion or color increased when the same direction of motion or color was attended. Furthermore, in a behavioral study, they demonstrated that feature-based attention modulates perception as well (Saenz et al., 2003). However, these previous reports focused on feature perception on rather intermediate levels. Our study now suggests that feature-based attention contributes to contour integration. 
What is the neural basis of this feature-based attention? Attentional modulation of brain activity has been found at the early stage of the lateral geniculate nucleus (LGN) (O'Connor, Fukui, Pinsk, & Kastner, 2002), where within the parvocellular system neurons show orientation selectivity. Could attention-modulated orientation processing be the neural substrate of the present feature-specific effects? It is possible that participants chose to perform the contour-detection task by searching for contour parts with a certain average orientation. However, since contour-defined loops were curved, consisting of elements with multiple orientations, it is unclear whether orientation-based search would have been a sufficiently sensitive strategy. Setting the cut-off criterion for orientation-based contour search too high would have created a substantial number of false negative errors, and a low cut-off would have introduced false positive errors (for a systematic analysis of orientation-based contour detection, see Hess & Dakin, 1997). Therefore, it appears unlikely that attention to orientation is the sole explanation for our data. 
Since participants were highly trained, it is conceivable that they incorporated more efficient strategies based on contour perception. Processes of contour perception activate a network of low- and high-level areas (Kourtzi et al., 2003; Mendola et al., 1999), beginning with primary visual cortex, which is assumed to be associated with the integration of neighboring segments of smooth, largely continuous contours (e.g., Field et al., 1993; Lee, Mumford, Romero, & Lamme, 1998; Zhou et al., 2000; Li, Piech, & Gilbert, 2006). It is plausible that in the present study these processes were modulated by feature-based attention. 
However, we believe that higher level processes contributed to our results as well: To begin with, there is evidence that contour-related activity in later areas such as the LOC precedes that of earlier areas (Murray et al., 2002). Furthermore, the LOC is responsive to shapes not only with smooth but also with uneven contours (Stanley & Rubin, 2003). Hence, it seems that object perception (Peterson & Gibson, 1993; Zemel et al., 2002), conveyed through feedback (Super et al., 2001), plays a significant role in the perception of contours, in particular under poor visual conditions. In agreement with this, we found feature-based effects on contour perception to occur when contour segments had reduced collinearity (Figure 3). 
What is more, Martinez-Trujillo and Treue (2004) found for motion-sensitive neurons that attention to certain feature values enhances responses of neurons that prefer stimuli with more or less identical feature values and inhibits neurons preferring different feature values. It is fair to assume that other brain areas show similar properties (Maunsell & Cook, 2002). Support for similar attentional mechanisms in human object areas comes from Murray and Wojciulik (2003), who found attention to objects to result in reduced adaptation of BOLD signal, which is indicative of increased neural sensitivity. Critically, however, the present results differed from those of Martinez-Trujillo and Treue in that we only found evidence for inhibition. The lack of facilitation would be rather difficult to explain if feature-based attention originated exclusively from earlier areas that implement perception of orientation or contour segments because contour-defined loops were highly similar on such local levels but differed on a global level. Further research will be necessary to test whether feature-based attention enhances contour perception of similarly shaped contours in further support of a high-level implementation of feature-based attention. 
In sum, our data are consistent with a mechanism of feature-based attention that influences not only earlier and intermediate, but also more complex non-spatial representations of visual input perhaps similar to the feature-specific object shape representations that are observed in monkey IT (Tsunoda, Yamane, Nishizaki, & Tanifuji, 2001). From these areas, activity might cascade back to earlier cortical and subcortical areas. This would be in good agreement with imaging studies that found evidence for entire networks of brain areas being associated with attention to shapes (Corbetta et al., 1990, 1991). 
Interactions with areas outside such networks could be expected as well. That is, the present observation that contour perception is inhibited when attention is on motion leads to the prediction that attending to a particular feature dimension such as motion might not only enhance neural response rates within motion sensitive regions but also suppress responses within other regions sensitive to other visual features such as contours or shapes. 
The fact that inhibition occurred specifically when the motion detection task was perceptually demanding concurs with another line of research on the interactions of task demands and attention (e.g., Lavie & Tsal, 1994; Urbach & Spitzer, 1995). Here we show that this interaction is feature specific, in agreement with Saenz et al. (2003), who found effects of feature-based attention when the attended stimuli were presented together with distractors. In addition, our results suggest task demands have an inhibitory influence. Similar feature-specific task demands are also consistent with Rees, Frith, and Lavie's (1997) finding that a linguistically demanding task reduces functional responses to motion; and feature-specific task demands might explain the seeming contradiction that Pritchard and Warm (1983) observed attentional costs of an (incongruent) working memory task on contour perception, while others found that during visual search (in the absence of an incongruent task) such contours do not require much attention but pop out (Davis & Driver, 1994; Gurnsey et al., 1992). 
Remarkably, feature-incongruent task demands are not always associated with reduced perception. In clear contrast to the results presented here and those of others (e.g., Saenz et al., 2003), Morrone et al. (2002) found contrast discrimination of peripherally presented gratings to be reduced when participants performed a central contrast detection task, but only when the contrasts of both tasks were defined in the same feature dimension (color or luminance). When the two tasks used feature-incongruent contrasts, the peripheral gratings were accurately perceived no matter whether the central task was performed concurrently or how difficult it was. This disagreement might be due to differential attentional effects depending on the tested feature dimensions. For example, Morrone et al.'s stimuli, unlike ours, selectively stimulated the magno- or parvocellular system. These two neural systems might access independent attentional resources. As a second possibility, it seems noteworthy that Morrone et al.'s peripheral task required the focus of attention to spread across an area that included the central task. It is possible that such spatial overlap of tasks puts particular strain on attentional control. For example, when attention splits into two peripheral foci, foveal regions tend to be suppressed (McMains & Somers, 2004); this might happen in a feature-specific manner. Either way, the extent to which the enclosure of the attentional focus, as well as its control, depends on feature congruency remains an interesting question for future investigation. 
Conclusions
To our knowledge, the present study is the first to show that feature-based attention to contours influences the processing of objects, thus demonstrating that the concept of non-spatial attentional mechanisms extends to more complex forms of perception that could be implemented not only in low-level but also in higher visual areas of the ventral stream, perhaps the LOC. Future research will be required to localize and identify the exact neural mechanisms underlying this effect and its dependence on attentional resources and task demands. 
Acknowledgments
We would like to thank the editor and two anonymous reviewers for their thoughtful comments on an earlier version of our manuscript. This research was supported by the Natural Sciences and Engineering Research Council of Canada, the Canada Foundation for Innovation, and the Ontario Innovation Trust. 
Commercial relationships: none. 
Corresponding author: Matthias Niemeier. 
Address: 1265 Military Trail, Toronto, Ontario, Canada, M1C 1A4. 
References
Ben-Av, M. B. Sagi, D. Braun, J. (1992). Visual attention and perceptual grouping. Perception & Psychophysics, 52, 277–294. [PubMed] [CrossRef] [PubMed]
Bichot, N. P. Rossi, A. F. Desimone, R. (2005). Parallel and serial neural mechanisms for visual search in macaque area V4. Science, 308, 529–534. [PubMed] [CrossRef] [PubMed]
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436. [PubMed] [CrossRef] [PubMed]
Brefczynski, J. A. DeYoe, E. A. (1999). A physiological correlate of the ‘spotlight’ of visual attention. Nature Neuroscience, 2, 370–374. [PubMed] [Article] [CrossRef] [PubMed]
Bundesen, C. (1990). A theory of visual attention. Psychological Review, 97, 523–547. [PubMed] [CrossRef] [PubMed]
Cant, J. S. Goodale, M. A. (2007). Attention to form or surface properties modulates different regions of human occipitotemporal cortex. Cerebral Cortex, 17, 713–731. [PubMed] [CrossRef] [PubMed]
Chelazzi, L. Miller, E. K. Duncan, J. Desimone, R. (1993). A neural basis for visual search in inferior temporal cortex. Nature, 363, 345–347. [PubMed] [CrossRef] [PubMed]
Corbetta, M. Miezin, F. M. Dobmeyer, S. Shulman, G. L. Petersen, S. E. (1990). Attentional modulation of neural processing of shape, color, and velocity in humans. Science, 248, 1556–1559. [PubMed] [CrossRef] [PubMed]
Corbetta, M. Miezin, F. M. Dobmeyer, S. Shulman, G. L. Petersen, S. E. (1991). Selective and divided attention during visual discriminations of shape, color, and speed: Functional anatomy by positron emission tomography. Journal of Neuroscience, 11, 2383–2402. [PubMed] [Article] [PubMed]
Cornelissen, F. W. Peters, E. M. Palmer, J. (2002). The Eyelink Toolbox: Eye tracking with MATLAB and the Psychophysics Toolbox. Behavioral Research Methods Instruments & Computers, 34, 613–617. [PubMed] [Article] [CrossRef]
Davis, G. Driver, J. (1994). Parallel detection of Kanizsa subjective figures in the human visual system. Nature, 371, 791–793. [PubMed] [CrossRef] [PubMed]
Duncan, J. Ward, R. Shapiro, K. (1994). Direct measurement of attentional dwell time in human vision. Nature, 369, 313–315. [PubMed] [CrossRef] [PubMed]
Efron, B. Tibshirani, R. (1993). An introduction to the bootstrap. New York: Chapman and Hall.
Felleman, D. J. Van Essen, D. C. (1991). Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex, 1, 1–47. [PubMed] [Article] [CrossRef] [PubMed]
Field, D. J. Hayes, A. Hess, R. F. (1993). Contour integration by the human visual system: Evidence for a local “association field.” Vision Research, 33, 173–193. [PubMed] [CrossRef] [PubMed]
Geisler, W. S. Perry, J. S. Super, B. J. Gallogly, D. P. (2001). Edge co‐occurrence in natural images predicts contour grouping performance. Vision Research, 41, 711–724. [PubMed] [CrossRef] [PubMed]
Gurnsey, R. Humphrey, G. K. Kapitan, P. (1992). Parallel discrimination of subjective contours defined by offset gratings. Perception & Psychophysics, 52, 263–276. [PubMed] [CrossRef] [PubMed]
Hess, R. F. Dakin, S. C. (1997). Absence of contour linking in peripheral vision. Nature, 390, 602–604. [PubMed] [CrossRef] [PubMed]
Itti, L. Koch, C. (2001). Computational modelling of visual attention. Nature Reviews, Neuroscience, 2, 194–203. [PubMed] [CrossRef]
James, T. W. Culham, J. Humphrey, G. K. Milner, A. D. Goodale, M. A. (2003). Ventral occipital lesions impair object recognition but not object-directed grasping: An fMRI study. Brain, 126, 2463–2475. [PubMed] [Article] [CrossRef] [PubMed]
Kourtzi, Z. Tolias, A. S. Altmann, C. F. Augath, M. Logothetis, N. K. (2003). Integration of local features into global shapes: Monkey and human FMRI studies. Neuron, 37, 333–346. [Pubmed] [Article] [CrossRef] [PubMed]
Lavie, N. Ro, T. Russell, C. (2003). The role of perceptual load in processing distractor faces. Psychological Science, 14, 510–515. [PubMed] [CrossRef] [PubMed]
Lee, S. H. Blake, R. (1999). Visual form created solely from temporal structure. Science, 284, 1165–1168. [PubMed] [CrossRef] [PubMed]
Lee, T. S. Mumford, D. Romero, R. Lamme, V. A. (1998). The role of the primary visual cortex in higher level vision. Vision Research, 38, 2429–2454. [PubMed] [CrossRef] [PubMed]
Li, W. Piech, V. Gilbert, C. D. (2006). Contour saliency in primary visual cortex. Neuron, 15, 951–962. [PubMed] [Article] [CrossRef]
Lu, J. Itti, L. (2005). Perceptual consequences of feature–based attention. Journal of Vision, 5, (7):2, 622–631, http://journalofvision.org/5/7/2/, doi:10.1167/5.7.2. [PubMed] [Article] [CrossRef]
Malach, R. Reppas, J. B. Benson, R. R. Kwong, K. K. Jiang, H. (1995). Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proceedings of the National Academy of Sciences of the United States of America, 92, 8135–8139. [PubMed] [Article] [CrossRef] [PubMed]
Martinez-Trujillo, J. C. Treue, S. (2004). Feature-based attention increases the selectivity of population responses in primate visual cortex. Current Biology, 14, 744–751. [PubMed] [Article] [CrossRef] [PubMed]
Maunsell, J. H. Cook, E. P. (2002). The role of attention in visual processing. Philosophical Transactions of the Royal Society of London Series B: Biological Sciences, 357, 1063–1072. [PubMed] [Article] [CrossRef]
McAdams, C. J. Maunsell, J. H. (2000). Attention to both space and feature modulates neuronal responses in macaque area V4. Journal of Neurophysiology, 83, 1751–1755. [PubMed] [Article] [PubMed]
McMains, S. A. Somers, D. C. (2004). Multiple spotlights of attentional selection in human visual cortex. Neuron, 42, 677–686. [PubMed] [Article] [CrossRef] [PubMed]
Melcher, D. Papathomas, T. V. Vidnyanszky, Z. (2005). Implicit attentional selection of bound visual features. Neuron, 46, 723–729. [PubMed] [Article] [CrossRef] [PubMed]
Mendola, J. D. Dale, A. Fischl, B. Liu, A. K. Tootell, R. B. (1999). The representation of illusory and real contours in human cortical visual areas revealed by functional magnetic resonance imaging. Journal of Neuroscience, 19, 8560–8572. [PubMed] [Article] [PubMed]
Morrone, M. C. Denti, V. Spinelli, D. (2002). Color and luminance contrasts attract independent attention. Current Biology, 12, 1134–1137. [PubMed] [Article] [CrossRef] [PubMed]
Murray, M. M. Wylie, G. R. Higgins, B. A. Javitt, D. C. Schroeder, C. E. Foxe, J. J. (2002). The spatiotemporal dynamics of illusory contour processing: Combined high-density electrical mapping, source analysis, and functional magnetic resonance imaging. Journal of Neuroscience, 22, 5055–5073. [PubMed] [Article] [PubMed]
Murray, S. O. Wojciulik, E. (2003). Attention increases neural selectivity in the human lateral occipital complex. Nature Neuroscience, 7, 70–74. [PubMed] [CrossRef] [PubMed]
Navalpakkam, V. Itti, L. (2007). Search goal tunes visual features optimally. Neuron, 53, 605–617. [PubMed] [CrossRef] [PubMed]
Niemeier, M. Goltz, H. C. Kuchinad, A. Tweed, D. B. Vilis, T. (2005). A contralateral preference in the lateral occipital area: Sensory and attentional mechanisms. Cerebral Cortex, 15, 325–331. [PubMed] [Article] [CrossRef] [PubMed]
Nothdurft, H. (1993). Faces and facial expressions do not pop out. Perception, 22, 1287–1298. [PubMed] [CrossRef] [PubMed]
O'Connor, D. H. Fukui, M. M. Pinsk, M. A. Kastner, S. (2002). Attention modulates responses in the human lateral geniculate nucleus. Nature Neuroscience, 5, 1203–1209. [PubMed] [Article] [CrossRef] [PubMed]
O'Craven, K. M. Downing, P. E. Kanwisher, N. (1999). fMRI evidence for objects as the units of attentional selection. Nature, 401, 584–587. [PubMed] [CrossRef] [PubMed]
Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442. [PubMed] [CrossRef] [PubMed]
Peterson, M. A. Gibson, B. S. (1993). Shape recognition inputs to figure-ground organization in three-dimensional displays. Cognitive Psychology, 25, 383–429. [CrossRef]
Posner, M. I. Snyder, C. R. Davidson, B. J. (1980). Attention and the detection of signals. Journal of Experimental Psychology, 109, 160–174. [PubMed] [CrossRef] [PubMed]
Pritchard, W. S. Warm, J. S. (1983). Attentional processing and the subjective contour illusion. Journal of Experimental Psychology: General, 112, 145–175. [PubMed] [CrossRef] [PubMed]
Reddy, L. Wilken, P. Koch, C. (2004). Face–gender discrimination is possible in the near-absence of attention. Journal of Vision, 4, (2):4, 106–117, http://journalofvision.org/4/2/4/, doi:10.1167/4.2.4. [Pubmed] [Article] [CrossRef]
Rees, G. Frith, C. D. Lavie, N. (1997). Modulating irrelevant motion perception by varying attentional load in an unrelated task. Science, 278, 1616–1619. [PubMed] [CrossRef] [PubMed]
Riesenhuber, M. Poggio, T. (1999). Hierarchical models of object recognition in cortex. Nature Neuroscience, 2, 1019–1025. [PubMed] [Article] [CrossRef] [PubMed]
Saenz, M. Buracas, G. T. Boynton, G. M. (2002). Global effects of feature-based attention in human visual cortex. Nature Neuroscience, 5, 631–632. [PubMed] [Article] [CrossRef] [PubMed]
Saenz, M. Buracas, G. T. Boynton, G. M. (2003). Global feature-based attention for motion and color. Vision Research, 43, 629–637. [ PubMed] [CrossRef] [PubMed]
Sears, C. R. Pylyshyn, Z. W. (2000). Multiple object tracking and attentional processing. Canadian Journal of Experimental Psychology, 54, 1–14. [PubMed] [CrossRef] [PubMed]
Shulman, G. L. (1992). Attentional modulation of a figural aftereffect. Perception, 21, 7–19. [PubMed] [CrossRef] [PubMed]
Simoncelli, E. P. Olshausen, B. A. (2001). Natural image statistics and neural representation. Annual Review of Neuroscience, 24, 1193–1216. [PubMed] [CrossRef] [PubMed]
Stanley, D. A. Rubin, N. (2003). fMRI activation in response to illusory contours and salient regions in the human lateral occipital complex. Neuron, 37, 323–331. [PubMed] [CrossRef] [PubMed]
Super, H. Spekreijse, H. Lamme, V. A. (2001). Two distinct modes of sensory processing observed in monkey primary visual cortex (V1). Nature Neuroscience, 4, 304–310. [PubMed] [Article] [CrossRef] [PubMed]
Suzuki, S. (2001). Attention-dependent brief adaptation to contour orientation: A high-level aftereffect for convexity? Vision Research, 41, 3883–3902. [PubMed] [CrossRef] [PubMed]
Tootell, R. B. Reppas, J. B. Kwong, K. K. Malach, R. Born, R. T. Brady, T. J. (1995). Functional analysis of human MT and related visual cortical areas using magnetic resonance imaging. Journal of Neuroscience, 15, 3215–3230. [PubMed] [Article] [PubMed]
Treisman, A. M. Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97–136. [PubMed] [CrossRef] [PubMed]
Treue, S. Martinez Trujillo, J. C. (1999). Feature-based attention influences motion processing gain in macaque visual cortex. Nature, 399, 575–579. [PubMed] [CrossRef] [PubMed]
Tsunoda, K. Yamane, Y. Nishizaki, M. Tanifuji, M. (2001). Complex objects are represented in macaque inferotemporal cortex by the combination of feature columns. Nature Neuroscience, 4, 832–838. [PubMed] [Article] [CrossRef] [PubMed]
Urbach, D. Spitzer, H. (1995). Attentional effort modulated by task difficulty. Vision Research, 35, 2169–2177. [PubMed] [CrossRef] [PubMed]
Wojciulik, E. Kanwisher, N. Driver, J. (1998). Covert visual attention modulates face-specific activity in the human fusiform gyrus: FMRI study. Journal of Neurophysiology, 79, 1574–1578. [PubMed] [Article] [PubMed]
Yeh, S. L. Chen, I. P. De Valois, K. K. De Valois, R. L. (1996). Figural aftereffects and spatial attention. Journal of Experimental Psychology: Human Perception and Performance, 22, 446–460. [PubMed] [CrossRef] [PubMed]
Zemel, R. S. Behrmann, M. Mozer, M. C. Bavelier, D. (2002). Experience-dependent perceptual grouping and object-based attention. Journal of Experimental Psychology: Human Perception and Performance, 28, 202–217. [CrossRef]
Zhou, H. Friedman, H. S. von der Heydt, R. (2000). Coding of border ownership in monkey visual cortex. Journal of Neuroscience, 20, 6594–6611. [PubMed] [Article] [PubMed]
Lavie, N. Tsal, Y. (1994). Perceptual load as a major determinant of the locus of selection in visual attention. Perception & Psychophysics, 56, 183–197. [PubMed] [CrossRef]