Article  |   January 2014
Feature-based attention is independent of object appearance
Author Affiliations
  • Guobei Xiao
    Department of Ophthalmology of Shanghai Tenth People's Hospital and Tongji Eye Institute, Tongji University School of Medicine, Shanghai, China
  • Guotong Xu
    Department of Ophthalmology of Shanghai Tenth People's Hospital and Tongji Eye Institute, Tongji University School of Medicine, Shanghai, China
  • Xiaoqing Liu
    Department of Ophthalmology of Shanghai Tenth People's Hospital and Tongji Eye Institute, Tongji University School of Medicine, Shanghai, China
  • Jingying Xu
    Department of Ophthalmology of Shanghai Tenth People's Hospital and Tongji Eye Institute, Tongji University School of Medicine, Shanghai, China
  • Fang Wang
    Department of Ophthalmology of Shanghai Tenth People's Hospital and Tongji Eye Institute, Tongji University School of Medicine, Shanghai, China
  • Li Li
    Department of Ophthalmology of Shanghai Tenth People's Hospital and Tongji Eye Institute, Tongji University School of Medicine, Shanghai, China
  • Laurent Itti
    Computer Science Department and Neuroscience Graduate Program, University of Southern California
    itti@usc.edu
  • Jianwei Lu
    Department of Ophthalmology of Shanghai Tenth People's Hospital, The Advanced Institute of Translational Medicine and School of Software Engineering, Tongji University, Shanghai, China
    jwlu33@gmail.com
Journal of Vision January 2014, Vol.14, 3. doi:https://doi.org/10.1167/14.1.3
Abstract

How attention interacts with low-level visual representations to give rise to perception remains a central yet controversial question in neuroscience. While several previous studies suggest that the units of attentional selection are individual objects, other evidence points instead toward lower-level features, such as an attended color or direction of motion. We used both human fMRI and psychophysics to investigate the relationship between object-based and feature-based attention. Specifically, we asked whether feature-based attention is modulated by object appearance, comparing three conditions: (a) features appearing as one object; (b) features appearing as two separate but identical objects; (c) features appearing as two different objects. Stimuli were two random-dot fields presented bilaterally around a central fixation point, and object appearance was induced by drawing one or two boxes around the fields. In the fMRI experiment, participants performed a luminance discrimination task on one side and ignored the other side, where we probed for enhanced activity when that side was either perceived as belonging to the same object as the task side, or shared features with it. In the psychophysical experiments, participants performed luminance discrimination on both sides with overlapping red and green dots, attending to either the same features (red/red or green/green) or different features (red/green or green/red) on the two sides. Results show that feature-based attentional enhancement exists in all three conditions, i.e., regardless of whether features appear as one object, two identical objects, or two different objects. Our findings indicate that feature-based attention differs from object-based attention in that it does not depend on object appearance. Thus feature-based attention may be mediated by earlier cortical processes that are independent of the grouping of visual features into well-formed objects.

Introduction
Attentional selection has at least three different facets: spatial selection, which enhances a spotlight at the attended location (Brefczynski & DeYoe, 1999; Chawla, Rees, & Friston, 1999; Crick, 1984; Itti & Koch, 2001; Itti, Koch, & Braun, 2000; Lee, Itti, Koch, & Braun, 1999; Motter, 1994; Treue & Maunsell, 1996); object selection, whereby objects are selected as whole entities (Blaser, Pylyshyn, & Holcombe, 2000; Desimone & Duncan, 1995; Duncan, 1984; O'Craven, Downing, & Kanwisher, 1999; Serences, Schwarzbach, Courtney, Golay, & Yantis, 2004); and feature-based selection, whereby attention enhances attended features throughout visual cortex (Andersen, Muller, & Hillyard, 2009; Beauchamp, Cox, & DeYoe, 1997; McAdams & Maunsell, 2000; Saenz, Buracas, & Boynton, 2002; Treue & Martinez Trujillo, 1999). Understanding how these three facets may be reconciled into a unified operational theory of attentional selection remains a pressing yet largely unsolved research challenge. 
Feature-based attention was originally demonstrated as an increase in the response gain of direction-selective cells in macaque MT whose preferred feature matched the attended feature, even when their receptive fields did not overlap the attended location (Treue & Martinez Trujillo, 1999). Human feature-based attention was then observed as a global BOLD-fMRI enhancement of attended visual features throughout early visual cortices (V1 to MT; Saenz et al., 2002), a finding supported by other studies (Liu & Mance, 2011; Lu & Itti, 2005; Mendoza, Schneiderman, Kaul, & Martinez-Trujillo, 2011; Saenz, Buracas, & Boynton, 2003; Sohn, Papathomas, Blaser, & Vidnyanszky, 2004). A further study showed that feature-based attention can even spread to empty regions of visual space that contain no stimuli (Serences & Boynton, 2007). 
On the other hand, broad psychophysical and physiological evidence supports the view that attentional selection takes place at the object level. Psychophysical studies show significantly better discrimination performance on two features of one versus two distinct objects (Blaser et al., 2000; Duncan, 1984; Duncan & Nimmo-Smith, 1996; Reynolds & Heeger, 2009). Attentional selection of a particular feature of one object also enhances processing of that object's other features (Sohn et al., 2004), even when those features are irrelevant to the task that engages attention (Lu & Itti, 2005). This object-based attentional selection can even enhance unconscious features of an object (Melcher, Papathomas, & Vidnyanszky, 2005), and can modulate firing rates even in areas poorly tuned for the attended feature (Katzner, Busse, & Treue, 2009). Comparative studies suggest that attentional selection biases are mediated more at the object level than at the feature level: for example, attention can rapidly track an object through feature space, even when distractors occupy the location of the attended object (Blaser et al., 2000). 
A common aspect of feature-based and object-based attention is that both can enhance visual features outside the attentional selection window. The essential difference is that object-based attention only enhances features belonging to the one object that engages attention. A few recent studies have started to distinguish feature-based from object-based attention at the feature level (Festman & Braun, 2010; Wegener, Ehn, Aurich, Galashan, & Kreiter, 2008). For example, it has been proposed that feature-based attention engages a mechanism different from object-based attention, involving active suppression of nonrelevant features (Wegener et al., 2008). Other studies, in contrast, have suggested that feature-based attention might modulate perception simply by contributing to object perception (Stojanoski & Niemeier, 2007). However, how feature-based attention is modulated at the object level remains unclear. Specifically, does the organization of features into objects modulate feature-based attentional enhancement and, if so, how does it work (for example, does it depend on object appearance)? In theory, is low-level feature attentional enhancement modulated by higher, object-level cortical processing, and if so, how? 
In previous feature-based attention studies that showed feature-based attentional enhancement at an unattended location (Lu & Itti, 2005; Saenz et al., 2002; Treue & Martinez Trujillo, 1999), two distant stimuli (one attended and one ignored) were displayed over a uniform background. With such a configuration, some participants reported a tendency to perceptually group both stimuli into one object when the stimuli shared a visual feature (Lu & Itti, 2005). This raises two questions: (a) Since object-based attention enhances the cortical representation of features that belong to the attended object, could feature-based attentional enhancement be due to grouping of like features into coherent objects, and thus derive from object-based attention? Or, more directly, does feature-based attentional enhancement still exist when features are perceived as belonging to distinct objects? (b) How is feature-based attentional enhancement modulated by object appearance; that is, does it still exist when features appear to belong to distinct objects with different appearances? To answer these questions, we used both functional magnetic resonance imaging (fMRI) and psychophysics to study the feature-based attentional enhancement effect under different conditions of object perception. The results suggest that feature-based attention differs from object-based attention: the enhancement exists not only when the features appear as distinct objects, but is also independent of the appearance of the objects in which the features are embedded. 
General methods
Both functional neuroimaging (fMRI) and psychophysics were used to study the feature-based attentional enhancement reported by Saenz et al. (2002, 2003), but under three different object appearance conditions: (a) the features appeared as one object; (b) the features appeared in two separate but identical objects (same stimuli and object shapes); (c) the features appeared in two distinct objects with different shapes. In the fMRI study, our stimuli followed Saenz et al. (2002), with overlapping green/red dots on the attended side and red dots on the ignored side. Participants performed a luminance discrimination task on the attended side. In the psychophysical studies, the stimuli were overlapping red/green dots on both sides and the tasks were two-interval forced-choice (2-IFC) luminance discrimination dual tasks (one task on each side). 
Experiment 1
Methods
Here, using functional neuroimaging (fMRI), we replicated the colored-dots feature-based attention experiment of Saenz et al. (2002), but added two conditions: either both the attended and the unattended parts of the display appeared to belong to one object, or they appeared to belong to two objects. We hypothesized that if attentional enhancement is processed at the object level, feature enhancement should be reduced or abolished when the features appear to belong to two separate objects. In contrast, if feature-based attention is distinct from object-based attention, enhancement may persist even when the two stimuli appear as two objects. Our results below show that the feature enhancement effect exists consistently in both the one-object and the two-object conditions, with an even larger enhancement in the two-object condition. 
Participants
Five adults participated in the fMRI experiments (three male). One was excluded because of technical problems during scanning, leaving four participants for the final analysis. Before data collection, each participant completed more than 1,000 training trials, until performance reached a stable level, to eliminate perceptual learning effects. The study was approved by the University of Southern California Institutional Review Board. Written informed consent was obtained from all participants. 
Stimuli and task
Stimuli were two fields of randomly distributed stationary dots presented bilaterally around a central fixation cross (Figure 1). On the attended side, the display consisted of overlapping red and green dots (120 dots of each color). The ignored side always contained 120 red dots. In the one-object condition, the two fields of dots were displayed in a single gray box appearing on top of a textured background. In the two-object condition, each field was displayed in its own small box on the same textured background (Figure 1). A block design was used to study the BOLD activity on the ignored side under four conditions: (a) participants attended to the same feature (red dots) on the attended side as was presented on the ignored side, when the two dot fields appeared as one object (OOSF; one object, same feature); (b) participants attended to a different feature (green dots) on the attended side in the one-object condition (OODF; one object, different features); (c) participants attended to the same feature as presented on the ignored side when the two dot fields appeared as two objects (TOSF; two objects, same feature); (d) participants attended to a different feature in the two-object condition (TODF; two objects, different features). A total of 396 to 528 trials (36 to 48 blocks of 11 trials each) were acquired across all four conditions for each of the four participants. The left and right sides were attended equally often, with a blank cue trial directing participants to the attended side. Prior to the task, a region of interest (ROI) corresponding to the cortical representation of the ignored stimulus was defined for each individual participant using a localizer stimulus (Figure 1d). Blocks were always presented in the order OOSF-OODF-TOSF-TODF; for analysis, a general linear model was fit within the region of interest. 
Figure 1
 
Experimental stimuli and frame sequences in the fMRI study. (a) The two-interval forced-choice task frame sequence in the one-object condition. Within one TR (repetition time = 3 s), the first stimulus frame was displayed for 350 ms, followed by a 150 ms interframe with the same background but no dot stimuli (blank frame). The second stimulus frame then appeared for 350 ms, and finally the blank frame was shown for 2,150 ms, during which participants reported whether the first or the second stimulus frame had higher luminance for either the red or the green dots on the attended side (left side in the figure). (b) Closeup of the shadow effect of the stimuli. The shadow was made from the same background texture but at lower luminance (60% of the textured background luminance) to enhance the appearance of objectness. (c) The experimental stimuli in the two-object condition. The frame sequences are identical to those in the one-object condition shown in (a). (d) The localizer stimulus. The checkerboard stimuli consisted of 3 × 3 square frames, in each of which red and green squares flashed at 5 Hz.
To enhance the visibility of the dot stimuli, each dot was enlarged to a disk four pixels in diameter (display resolution 640 × 480). A minimum distance between any two disks was enforced so that dots did not overlap and spread across the whole field. The baseline luminance on both sides was randomized across trials (but kept between 200 and 255 to ensure that the dots were bright enough, i.e., the RGB value was (200−255, 0, 0) for red dots and (0, 200−255, 0) for green dots), so that participants could not compare luminance between the two sides to perform the luminance increment task on the attended side (see below). The dot fields subtended 4° of visual angle and were centered at 6° eccentricity from fixation. In both the one-object and two-object conditions, a shadow was cast around every box to enhance the appearance of objectness. The shadow was composed of the same texture as the background but at lower luminance. Figure 1d shows the fMRI localizer stimulus, a 3 × 3 red-green flashing checkerboard. The flashing frequency was set to 5 Hz to drive strong activity in the region of interest (ROI) in the participants' visual cortex. 
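As an illustration of the dot layout described above, the sketch below places each field of 120 dots by rejection sampling. The minimum separation value (min_sep_deg) and the square field geometry are assumptions, since the text only states that dots did not overlap and spread across the whole field.

```python
import numpy as np

def make_dot_field(n_dots=120, field_deg=4.0, min_sep_deg=0.15, rng=None):
    """Place n_dots dot centers uniformly within a field of side field_deg
    (degrees of visual angle), rejecting any candidate that falls closer
    than min_sep_deg to an already-placed dot so that disks never overlap."""
    rng = np.random.default_rng() if rng is None else rng
    dots = []
    while len(dots) < n_dots:
        candidate = rng.uniform(0.0, field_deg, size=2)
        if all(np.linalg.norm(candidate - d) >= min_sep_deg for d in dots):
            dots.append(candidate)
    return np.array(dots)

# One trial's attended side: overlapping red and green fields, with a
# randomized baseline luminance in 200-255 so that luminance cannot be
# compared across sides (RGB triples as described in the text).
rng = np.random.default_rng(0)
red_dots, green_dots = make_dot_field(rng=rng), make_dot_field(rng=rng)
baseline = int(rng.integers(200, 256))
red_rgb, green_rgb = (baseline, 0, 0), (0, baseline, 0)
```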
Participants were instructed to perform a two-interval forced-choice (2IFC) luminance discrimination task on either the red or the green dots on the attended side, while maintaining fixation on the central cross and ignoring the other side. Fixation was encouraged by the short display time and confirmed by later localizer analysis (we reasoned that if participants had been moving their eyes, the localized ROIs would have been less well defined and less symmetric between the left and right hemispheres than we observed). There was no task on the ignored side. In every 2IFC trial, participants reported whether the higher or the lower luminance occurred in the first interval. The initial luminance difference was set to each participant's threshold level (75% correct) through individual pretraining, so that the task was hard enough to engage strong attention on the attended side. Task difficulty during scanning was controlled by a staircase that maintained the luminance discrimination at threshold level. Scanning sessions were counted as valid if the participant's psychophysical performance remained close to the individual threshold level, which was the case for all scans and all participants. This ensured that attention was engaged on the attended side. 
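The text does not specify which staircase rule maintained threshold-level difficulty during scanning; as a minimal sketch, a generic n-down/1-up rule (which converges near, though not exactly at, a 75% threshold) could be implemented as follows, with the step size and floor being illustrative values.

```python
def staircase_update(delta, correct, history, step=0.02, n_down=3, floor=0.005):
    """n-down/1-up staircase: shrink the luminance difference after n_down
    consecutive correct responses, enlarge it after any error. history is a
    mutable list tracking the current run of correct responses."""
    history.append(correct)
    if not correct:
        history.clear()
        return delta + step
    if len(history) >= n_down:
        history.clear()
        return max(floor, delta - step)
    return delta

# Example: update the difficulty trial by trial from the observer's responses.
delta, run = 0.10, []
for correct in [True, True, True, False, True]:
    delta = staircase_update(delta, correct, run)
```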
fMRI scanning and analysis
The fMRI data were acquired on a Siemens 3T MRI scanner (MAGNETOM Trio). BOLD-sensitive images were collected with an echo-planar imaging (EPI) sequence with a repetition time (TR) of 3 s. We recorded fMRI activity in 17 slices covering all of early visual cortex (3 × 3 × 3 mm voxels, TE = 30 ms). Before the functional scanning, an MPRAGE structural scan of the whole brain was acquired using 192 slices (1 × 1 × 1 mm voxels). Each of the four blocks (OOSF, OODF, TOSF, and TODF) contained a 30-s task period (10 scans) plus a 3-s message period at the beginning for task instructions (which told participants to attend either to red or to green). The task changed every 30 s to avoid fMRI adaptation. The fMRI data were preprocessed (mean intensity adjustment, slice scan time correction, 3D motion correction, spatial smoothing, and temporal filtering) using Brain Voyager QX software (Brain Innovation BV, Maastricht, The Netherlands). A general linear model with four experimental regressors (OOSF, OODF, TOSF, and TODF), each based on a standard canonical HRF, was used to estimate the beta weight of the BOLD response for each block condition. 
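The block-design GLM can be illustrated with the minimal sketch below, which builds boxcar regressors for the OOSF-OODF-TOSF-TODF cycle, convolves them with a double-gamma canonical HRF, and fits ordinary least squares to an ROI-averaged time series. The HRF parameters and the decision to exclude the 3-s message scan from each block's boxcar are assumptions; the published analysis used Brain Voyager QX.

```python
import numpy as np
from scipy.stats import gamma

TR = 3.0                       # seconds per scan
SCANS_PER_BLOCK = 11           # 3-s message scan + 30-s task period (10 scans)
CONDITIONS = ["OOSF", "OODF", "TOSF", "TODF"]

def canonical_hrf(t):
    """Double-gamma canonical HRF sampled at times t (in seconds)."""
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

def design_matrix(n_cycles):
    """One HRF-convolved boxcar regressor per condition plus a constant term."""
    n_scans = n_cycles * len(CONDITIONS) * SCANS_PER_BLOCK
    hrf = canonical_hrf(np.arange(0.0, 30.0, TR))
    columns = []
    for c in range(len(CONDITIONS)):
        boxcar = np.zeros(n_scans)
        for cycle in range(n_cycles):
            start = (cycle * len(CONDITIONS) + c) * SCANS_PER_BLOCK
            boxcar[start + 1:start + SCANS_PER_BLOCK] = 1.0  # task scans only
        columns.append(np.convolve(boxcar, hrf)[:n_scans])
    columns.append(np.ones(n_scans))
    return np.column_stack(columns)

def fit_glm(roi_timeseries, X):
    """Ordinary least squares beta weights for the ROI-averaged signal."""
    betas, *_ = np.linalg.lstsq(X, roi_timeseries, rcond=None)
    return dict(zip(CONDITIONS + ["constant"], betas))
```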
Results
Figure 2 shows a typical region of interest selected for one of the participants. The region of interest was defined by computing linear correlation maps between periods when the localizer stimuli were displayed and periods when only the background and boxes were displayed without stimuli. A threshold of p < 0.00001 was used to define the region of interest for each participant. ROI size varied across participants from 2,455 to 3,572 voxels (1 × 1 × 1 mm voxels). 
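A minimal sketch of this ROI selection step, assuming the localizer regressor is simply the on/off time course of the flashing checkerboard (it could equally be convolved with an HRF) and that the threshold is applied to positively correlated voxels only:

```python
import numpy as np
from scipy import stats

def localizer_roi(voxel_ts, localizer_on, p_thresh=1e-5):
    """voxel_ts: (n_voxels, n_scans) array of preprocessed time series.
    localizer_on: (n_scans,) regressor, 1 while the flashing checkerboard
    is shown and 0 during background-only periods. Returns a boolean mask
    of voxels whose linear correlation with the regressor is positive and
    survives p < p_thresh."""
    r_p = [stats.pearsonr(ts, localizer_on) for ts in voxel_ts]
    r = np.array([rp[0] for rp in r_p])
    p = np.array([rp[1] for rp in r_p])
    return (r > 0) & (p < p_thresh)
```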
Figure 2
 
The bottom-left panel shows the fMRI response change on the ignored side when participants attended to the same feature and to a different feature, in both the one-object and the two-object conditions, averaged over all blocks for all four participants. The yellow periods correspond to the one-object condition and the blue periods to the two-object condition (OOSF: participants attended to the same feature in the one-object condition; OODF: participants attended to a different feature in the one-object condition; TOSF: participants attended to the same feature in the two-object condition; TODF: participants attended to a different feature in the two-object condition). The figure indicates that in the one-object condition the fMRI BOLD signal on the ignored side is higher when participants attended to the same feature (red, OOSF) than when they attended to a different feature (green, OODF). In the two-object condition, the fMRI signal is also higher when participants attended to the same feature (red, TOSF) than to a different feature (green, TODF). The figure also shows sagittal (top-left), coronal (top-right), and transverse (bottom-right) sections of the region of interest (ROI) for one participant. Detailed ROI parameters are given in the Results.
The fMRI response time series for the ignored stimulus, averaged over all scans of the four participants for each 132-s cycle, is plotted in the bottom-left panel of Figure 2. In the one-object condition, the fMRI response to the ignored stimulus was stronger when participants attended to the same feature (OOSF) than when they attended to a different feature (OODF) (p < 0.05, Figure 3a, averaged across the four participants). This observation confirms the previously reported feature-based attentional enhancement (Saenz et al., 2002). Interestingly, in the two-object condition, the fMRI response to the ignored stimulus was still enhanced when participants attended to the same feature (TOSF) compared to a different feature (TODF), with an even larger effect than in the one-object condition (p < 0.008, Figure 3a, averaged across the four participants). This result indicates that the enhancement produced by feature-based attention persisted even when the two stimuli appeared as two objects, suggesting that feature-based attentional enhancement is not object-based. The further analyses below confirmed this conclusion. 
Figure 3
 
Overall averaged fMRI results from all four participants in Experiment 1. The left panel shows the fMRI response (average beta weight) to the ignored stimuli and the right panel shows the fMRI response to the attended stimuli (the labels OOSF, OODF, TOSF, and TODF are as in Figures 1 and 2). The results indicate that, in both the one-object and the two-object conditions, the fMRI response to the ignored stimuli is significantly higher when participants attended to the same feature than when they attended to a different feature (left). As a control, the fMRI response to the attended stimuli shows no significant enhancement when attending to the same versus a different feature (t tests: ** p < 0.008, * p < 0.05, "n.s." no significant enhancement, p ≥ 0.1).
The results of individual participants consistently illustrate the non-object-based nature of feature-based attention (Figure 4). The beta weight difference between attending to the same feature as on the ignored side and attending to a different feature is plotted for each participant after a general linear model (GLM) fitting procedure. To simplify the exposition, only the beta weight difference is plotted here; the absolute beta weights for each condition are plotted in the overall averaged results (Figure 3). The GLM was fitted to each participant's data by combining all scans and considering only voxels within that participant's individual region of interest. To perform two-sample t tests, we also fitted the GLM to every session of each participant (9−12 sessions per participant), where each session was a single cycle of the four conditions (OOSF, OODF, TOSF, and TODF). The t tests were computed on the beta weights from the session-wise GLM fits, comparing same-feature and different-feature conditions in both the one-object and the two-object cases. In the worst case, a borderline p < 0.08 was obtained (see Figure 4) for two participants in the one-object condition, while all other comparisons showed significant differences. In sum, the difference between same feature and different feature was significant in six out of eight comparisons across the one-object and two-object conditions. This individual-participant effect is consistent with previous fMRI studies that revealed feature-based attentional enhancement over a blank black background (Saenz et al., 2002). More importantly, we also observed feature-based attentional enhancement in the two-object condition for all participants, with an even larger enhancement (blue bars in Figure 4). Statistical tests in the two-object condition indicated that all participants showed a significant beta weight difference when attending to the same versus a different feature (t test, p < 0.05 at worst). The GLM beta weights were obtained from 396 to 528 scan points per participant. These results indicate that the feature-based attentional effect exists consistently across participants, in both the one-object and the two-object conditions, arguing again that feature-based attention is not abolished when the two sides of the display are perceived as distinct objects. 
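The per-participant statistics can be sketched as follows, assuming the per-session beta weights from the session-wise GLM fits are collected into arrays keyed by condition; the two-sample t test mirrors the comparisons reported in Figure 4.

```python
from scipy import stats

def feature_enhancement_tests(session_betas):
    """session_betas: dict mapping 'OOSF', 'OODF', 'TOSF', 'TODF' to arrays
    of per-session beta weights for one participant's ignored-side ROI.
    Returns (t, p) for same- vs. different-feature in each object condition."""
    results = {}
    for same, diff, label in [("OOSF", "OODF", "one_object"),
                              ("TOSF", "TODF", "two_object")]:
        t, p = stats.ttest_ind(session_betas[same], session_betas[diff])
        results[label] = (t, p)
    return results
```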
Figure 4
 
In Experiment 1, the fMRI response to the ignored stimuli, shown as the beta weight difference (after fitting the GLM) between attending to the same feature and attending to a different feature, in both the one-object and two-object conditions (OOSF minus OODF, TOSF minus TODF). Condition labels (OOSF, OODF, TOSF, TODF) are as in Figure 1. For all participants, the beta weight difference for the ignored stimuli between same feature and different feature is significantly, or at borderline significance, greater than zero (two-sample t tests: *** p < 0.01, ** p < 0.05, * borderline p < 0.08) in both the one-object and two-object conditions, suggesting that a feature-based attention effect exists in both conditions, regardless of whether the features appear as one object or as two objects.
Figure 3a shows the overall averaged fMRI BOLD response to the ignored stimulus across all participants after the GLM fitting procedure. As a control, the participant-averaged BOLD response to the attended stimulus is plotted in Figure 3b. On the ignored side, there was significant enhancement when attending to the same feature compared to a different feature, in both the one-object and the two-object conditions (t tests on 39−52 data points per condition from all four participants: p < 0.05 in the one-object condition and p < 0.008 in the two-object condition). As a control, the fMRI response on the attended side showed no significant enhancement when participants attended to the same feature compared to a different feature, in either condition (p ≥ 0.1 in both conditions by the same t test), indicating that attention was equally engaged in the attended task in all conditions (Figure 3b). These control results for the attended stimulus are consistent with previous studies of feature-based attention (Saenz et al., 2002) and provide evidence that the attentional enhancement in our study corresponds to the previously reported feature-based attention effect. In summary, our fMRI results show that feature-based attentional enhancement exists in both the one-object and the two-object conditions, suggesting that feature-based attention does not depend on whether features appear as one object or as two distinct objects. 
Experiment 2
Here, using psychophysics, we replicated the colored-dots feature-based attention experiment of Saenz et al. (2003) under two conditions: either the attended features belonged to one object, or they appeared to belong to two objects. 
Methods
Participants
Three adults participated in this experiment (two male and one female). All participants had normal visual acuity and color vision and gave written informed consent at Tongji University. Before data collection, each participant completed more than 1,000 training trials, until performance reached a stable level, to eliminate perceptual learning effects. 
Stimuli and task
The general method was similar to the previous experiment, except that participants performed dual tasks and both sides contained overlapping colored dots. Participants were seated at a viewing distance of 40 cm from a 22-in. color monitor (Philips Brilliance 220SW9 LCD monitor, 72 Hz; Philips, Eindhoven, The Netherlands), with their head on a chin rest. Mean screen luminance was 30 cd/m2 and room luminance was 4 cd/m2. Stimuli were displayed at 8° eccentricity from fixation. Each side of the display was composed of two overlapping fields of 120 red and 120 green stationary random dots. The dots had limited lifetimes (200 ms) and appeared to flicker, as in the first experiment. Participants were instructed to perform a 2-IFC luminance discrimination task on both sides at the same time. In each 2-IFC trial, the task was to report whether or not a luminance change occurred between the two intervals for each of the two attended fields of dots, giving four equally probable responses: change (left)/change (right), change/no change, no change/change, or no change/no change. Two different luminance baselines (RGB value 205 or 180) were used to prevent comparison between the two sides; for example, if the RGB value of the red or green dots on one side was (205, 0, 0), the other side was set to (180, 0, 0). Preliminary experiments testing task difficulty were used to choose a luminance difference yielding approximately 60% correct performance on average (for a baseline of 205, the increment was from 205 to 255; for a baseline of 180, from 180 to 230). The chosen luminance difference was then held fixed in the main tasks, and percent correct was measured for all participants and tasks for comparison. 
The one-object and two-object conditions were studied separately, and each condition comprised a total of 1,440 trials per participant. Each participant carried out the discrimination task on 10 successive days, performing 144 trials of the one-object condition and 144 trials of the two-object condition each day (288 trials per session). During the task, attention was divided across the two fields of dots and engaged either on the same feature (both red or both green) or on different features (one red and one green). Within every 144 trials, four attention pairings were tested: red (left)/red (right), green/green, red/green, and green/red. Each pairing appeared once, in random order, so that same-feature and different-feature trials occurred equally often. Within each block of 36 trials, change and no-change trials occurred equally often in random order, independently on the two sides. 
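For illustration, one 144-trial set per appearance condition could be generated as in the sketch below. The assignment of each attention pairing to one 36-trial block is our reading of the design above, and the function and field names are ours.

```python
import random

def make_condition_trials(seed=None):
    """Build one 144-trial set for one appearance condition: the four
    attended-color pairings each fill one randomly ordered 36-trial block
    (so same- and different-feature trials occur equally often), and
    change/no-change events are balanced and shuffled independently for
    the left and right sides within each block."""
    rng = random.Random(seed)
    pairings = [("red", "red"), ("green", "green"),
                ("red", "green"), ("green", "red")]
    rng.shuffle(pairings)
    trials = []
    for left_color, right_color in pairings:
        left_change = [True] * 18 + [False] * 18
        right_change = left_change.copy()
        rng.shuffle(left_change)
        rng.shuffle(right_change)
        for cl, cr in zip(left_change, right_change):
            trials.append(dict(attend_left=left_color, attend_right=right_color,
                               change_left=cl, change_right=cr))
    return trials
```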
Results
As shown in Figure 5, participants performed the dual task on the two fields of dots separately in the one-object and two-object conditions, attending to same or different features. Performance for all three participants (KS, WY, and MH) was significantly better (higher percent correct) with the same feature than with different features, in both the one-object and two-object conditions (p < 0.01 for each participant, N = 1,440 trials per participant; Figure 5c). The magnitude of the feature-based attentional enhancement was similar in the one-object and two-object conditions (unlike the fMRI results, where the two-object condition showed a larger enhancement). Subjectively, participants reported the different-feature dual task to be much harder than the same-feature task. 
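The same- versus different-feature comparison can be sketched as below; the paper reports t tests with N = 1,440 trials per participant, and treating individual trials as the observations (with a trial scored correct as a single boolean) is an assumption made here for illustration.

```python
import numpy as np
from scipy import stats

def same_vs_different_test(trials, correct):
    """trials: list of dicts with 'attend_left'/'attend_right' colors (e.g.,
    from make_condition_trials above); correct: matching list of booleans,
    True when the trial was scored correct. Compares percent correct for
    same-feature vs. different-feature trials with an independent-samples
    t test and returns the two accuracies plus the test statistics."""
    same = np.array([c for t, c in zip(trials, correct)
                     if t["attend_left"] == t["attend_right"]], dtype=float)
    diff = np.array([c for t, c in zip(trials, correct)
                     if t["attend_left"] != t["attend_right"]], dtype=float)
    t_stat, p = stats.ttest_ind(same, diff)
    return same.mean(), diff.mean(), t_stat, p
```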
Figure 5
 
Psychophysical stimuli and results in Experiment 2. Experiment 2 compared the feature-based attentional enhancement in the one-object condition with that in the two-object condition. The 2-IFC luminance discrimination task is the same as in the fMRI experiment shown in Figure 1, but the stimulus frame timing is different (the first stimulus frame was displayed for 200 ms, followed by a 100 ms blank interframe; the second stimulus frame then appeared for 200 ms, after which participants pressed the response keys). (a) Stimulus and object appearance in the one-object condition. (b) Stimulus and object appearance in the two-object condition. (c) Percent correct for the three participants. Yellow bars show percent correct in the one-object condition and blue bars in the two-object condition. Task performance was better when dividing attention across same versus different colors in the one-object condition (OOSF > OODF) as well as in the two-object condition (TOSF > TODF) for all participants. Participant KS scored 75.3% (same feature, SF) versus 53.2% (different features, DF) in the one-object condition (OO), and 74.7% (SF) versus 53.9% (DF) in the two-object condition (TO). Participant WY scored 57.9% (SF) versus 38.5% (DF) for OO and 62.2% (SF) versus 37.9% (DF) for TO. Participant MH scored 47.9% (SF) versus 37.9% (DF) for OO and 46.8% (SF) versus 35.8% (DF) for TO. The magnitude of the enhancement, however, did not differ between the one-object and two-object conditions (t tests: **** p < 0.001, *** p < 0.01).
Experiment 3
Using psychophysics, we carried out a study similar to Experiment 2, but under two further conditions: either both the attended and the unattended parts of the display appeared to belong to two identically shaped objects (both embedded in square boxes), or they appeared to belong to two differently shaped objects (one embedded in a square box and the other in a circle). 
Methods
The methods were identical to those of Experiment 2, except that the shapes of the objects in which the two dot fields were embedded were manipulated. We compared feature-based attentional enhancement in the following conditions: two identical objects (both fields embedded in square boxes) or two different objects (one field embedded in a square box and the other in a circle). In each condition, every participant performed a total of 1,440 trials of the 2-IFC luminance discrimination task, with the same parameters as in Experiment 2. The same three participants performed the tasks and produced consistent results. 
Results
In this experiment, we observed feature-based attentional enhancement in both conditions: two identical objects and two different objects. Therefore, the appearance (here, shape) of the objects in which the features were embedded had no influence on the existence of feature-based attention. Figure 6 shows that feature-based attention was observed in both the two-same-object (TS) condition and the two-different-object (TD) condition for all participants (p < 0.01 for each participant, N = 1,440 trials per participant; Figure 6c). 
Figure 6
 
Psychophysical stimuli and results in Experiment 3. This experiment compared the feature-based attentional enhancement in the two-same-object condition with that in the two-different-object condition. The stimuli and tasks are identical to those of Experiment 2. (a) Stimulus and object appearance in the two-same-object condition (displayed here as two square boxes, the same as the two-object condition in Experiments 1 and 2). (b) Stimulus and object appearance in the two-different-object condition (one side displayed as a square box and the other as a circle). (c) Percent correct for the three participants. Yellow bars show percent correct in the two-same-object condition and blue bars in the two-different-object condition. Task performance was better when dividing attention across same versus different colors in the two-same-object condition (TS_SF > TS_DF) as well as in the two-different-object condition (TD_SF > TD_DF) for all three participants (TS_SF: participants attended to the same feature in the two-same-object condition; TS_DF: participants attended to different features in the two-same-object condition; TD_SF: participants attended to the same feature in the two-different-object condition; TD_DF: participants attended to different features in the two-different-object condition). Participant KS scored 75.0% (same feature, SF) versus 64.0% (different features, DF) for TS and 79.4% (SF) versus 63.5% (DF) for TD; participant WY scored 73.3% (SF) versus 43.2% (DF) for TS and 71.5% (SF) versus 36.5% (DF) for TD; participant MH scored 50.8% (SF) versus 35.6% (DF) for TS and 51.0% (SF) versus 36.7% (DF) for TD. (t tests: **** p < 0.001, *** p < 0.01, ** p < 0.05).
Discussion
Feature-based attention has been widely studied as the ability to enhance the representation of attended features throughout the visual cortex (Maunsell & Treue, 2006). Using both fMRI and psychophysics, our findings not only further confirm that feature-based attention differs from object-based attention (Wegener et al., 2008), but also reveal that the existence of feature-based attentional enhancement is independent of the appearance of the objects in which the features are embedded, whether the features are embedded in (a) one object, (b) two identical objects, or (c) two objects with different shapes. At the neural level, our results suggest that feature-based attentional enhancement could originate at lower processing levels, which do not depend on the visual input having been parsed into distinct object forms. Feature-based attention therefore seems to employ a distinct, and possibly hierarchically earlier, mechanism than object-based attention. 
Our study is consistent with previous related studies of feature-based attention. It confirms the previously reported feature-based attentional fMRI enhancement of the response to an ignored stimulus when that stimulus shares features with a distant, attended stimulus (Saenz et al., 2002), and clarifies that those earlier observations were not due to grouping of both stimulus sides into one object. Such enhancement has also been observed as an increase in the response of neuronal subpopulations that prefer the attended feature, even when the attended and unattended features are coded in the same visual areas (Liu, Stevens, & Carrasco, 2007). We also confirmed psychophysical studies showing that participants' dual-task performance is better when attending to same features than to different features (Saenz et al., 2003; Sally, Vidnyansky, & Papathomas, 2009). Moreover, our new results are consistent with our previous study, which showed that attention enhances both task-relevant and task-irrelevant features, but with different gain factors (Lu & Itti, 2005). Our present study supports behavioral evidence that attention can bias processing at the feature level, as opposed to the object level, as suggested by several psychophysical paradigms. For example, Katzner, Busse, and Treue (2006) showed that the integration of color and motion features of random dots occurred when they appeared across superimposed surfaces, which could not be accounted for by object-based attentional selection. In a visual search task, neurons exhibited enhanced responses whenever a preferred stimulus in their receptive field matched the target feature (Bichot, Rossi, & Desimone, 2005). At the neural level, our study supports the feature-similarity gain model (Treue & Martinez Trujillo, 1999), in which attention is proposed to increase the gain of neurons preferring the attended feature and to decrease the gain of neurons with the opposite preference (Maunsell & Treue, 2006). 
Recent studies have revealed distinctions between feature-based attention and object-based attention in different ways. On the feature-based side, Wegener et al. (2008) reported that feature-based attention employs a process distinct from object-based attention, involving suppression of nonrelevant features, a finding confirmed by another study (Taya, Adams, Graf, & Lavie, 2009). On the other hand, Boehler, Schoenfeld, Heinze, and Hopf (2010) demonstrated that object-based attention can select the irrelevant features of an unattended object if those features are shared with the attended object, thus extending the scope of object-based attention. Our findings reveal another facet of the difference between feature-based and object-based attention: feature-based attentional enhancement is independent of the number and appearance of the objects (one object, two identical objects, or two different objects) to which the features belong. Together with Boehler et al.'s (2010) study, it appears that both feature-based and object-based attentional selection can extend beyond a single object and can be spatially global regardless of object appearance. 
In our fMRI study, interestingly, we observed that feature-based attentional enhancement was even larger in the two-object condition than in the one-object condition. However, this difference was not replicated in our psychophysical experiments, where the enhancement was of similar magnitude in all three conditions (one object, two identical objects, and two different objects). One possible reason is that, in the fMRI experiment, the localizer stimulus used to define the region of interest was based on the two-object configuration, so the selected region of interest might have been better tuned to the two-object displays. Because we did not perform retinotopic mapping, the feature-based attentional enhancement (in both the one-object and two-object conditions) might reflect an average effect across the localizer area (possibly both striate and extrastriate areas; 2,455 to 3,572 voxels of 1 × 1 × 1 mm). Determining the effect in specific areas will require further research with retinotopic mapping. Finally, note that we chose simple shapes to define objects (one large rectangle or two smaller squares), and it remains to be investigated whether other shapes would yield identical results. 
Overall, by studying feature-based attentional enhancement with both fMRI and psychophysics under three conditions (one object, two identical objects, and two different objects), we showed that feature-based attention exists in all three conditions. Our results therefore indicate that the existence of feature-based attention does not depend on the appearance of the objects in which the features are embedded, suggesting a distinct mechanism that may originate at lower processing levels, where fully formed objects are not yet available. 
Acknowledgments
The authors thank Dr. J. C. Zhuang for technical support at the USC Dana and David Dornsife Cognitive Neuroscience Imaging Center. This work was supported by NSF (CCF-1317433) and the U.S. Army (W81XWH-10-2-0076) in the USA; by the 973 Program (No. 2013CB967101, No. 2011CB965102, No. 2010CB945601, No. 2011CB965103), the 863 Program (No. 2011AA020109), and the International Cooperation Program (No. 2011DFB30010) of the Ministry of Science and Technology of China; by the Shanghai Science Committee Foundation (13PJ1433200, 11PJ1407800); and by the National Natural Science Foundation of China (No. 31171419, No. 81100673). 
Commercial relationships: none. 
Corresponding authors: Laurent Itti; Jianwei Lu. 
Email: itti@usc.edu; jwlu33@gmail.com. 
Address: Tongji University, Shanghai, China. 
References
Andersen S. K. Muller M. M. Hillyard S. A. (2009). Color-selective attention need not be mediated by spatial attention. Journal of Vision, 9 (6): 2, 1–7, http://www.journalofvision.org/content/9/6/2, doi:10.1167/9.6.2. [PubMed] [Article] [PubMed]
Beauchamp M. S. Cox R. W. DeYoe E. A. (1997). Graded effects of spatial and featural attention on human area MT and associated motion processing areas. Journal of Neurophysiology, 78 (1), 516–520. [PubMed]
Bichot N. P. Rossi A. F. Desimone R. (2005). Parallel and serial neural mechanisms for visual search in macaque area V4. Science, 308 (5721), 529–534. [CrossRef] [PubMed]
Blaser E. Pylyshyn Z. W. Holcombe A. O. (2000). Tracking an object through feature space. Nature, 408 (6809), 196–199. [CrossRef] [PubMed]
Boehler C. N. Schoenfeld M. A. Heinze H. J. Hopf J. M. (2010). Object-based selection of irrelevant features is not confined to the attended object. Journal of Cognitive Neuroscience, 23, 2231–2239. [CrossRef] [PubMed]
Brefczynski J. A. DeYoe E. A. (1999). A physiological correlate of the ‘spotlight' of visual attention. Nature Neuroscience, 2 (4), 370–374. [PubMed]
Chawla D. Rees G. Friston K. J. (1999). The physiological basis of attentional modulation in extrastriate visual areas. Nature Neuroscience, 2 (7), 671–676. [CrossRef] [PubMed]
Crick F. (1984). Function of the thalamic reticular complex: The searchlight hypothesis. Proceedings of the National Academy of Sciences, USA, 81 (14), 4586–4590. [CrossRef]
Desimone R. Duncan J. (1995). Neural mechanisms of selective visual attention. Annual Review of Neuroscience, 18, 193–222. [CrossRef] [PubMed]
Duncan J. (1984). Selective attention and the organization of visual information. Journal of Experimental Psychology-General, 113 (4), 501–517. [CrossRef] [PubMed]
Duncan J. Nimmo-Smith I. (1996). Objects and attributes in divided attention: Surface and boundary systems. Perception & Psychophysics, 58 (7), 1076–1084. [CrossRef] [PubMed]
Festman Y. Braun J. (2010). Does feature similarity facilitate attentional selection? Attention, Perception, & Psychophysics, 72 (8), 2128–2143. [CrossRef]
Figure 1
 
Experimental stimuli and frame sequences in the fMRI study. (a) The two-interval forced-choice task frame sequence in the one-object condition. Within one TR (repetition time = 3 s), the first stimulus frame was displayed for 350 ms, followed by a 150 ms interframe showing the same background but no dot stimuli (blank frame). The second stimulus frame then appeared for 350 ms, and finally the blank frame was shown for 2150 ms while participants reported whether the first or the second stimulus frame had higher luminance for either the red or the green dots on the attended side (left side in the figure). (b) Close-up of the shadow effect of the stimuli. The shadow was rendered with the same background texture at lower luminance (60% of the textured-background luminance) to enhance the appearance of objectness. (c) The experimental stimuli in the two-object condition. The frame sequences are identical to those in the one-object condition shown in (a). (d) The localizer stimulus. The checker stimuli consisted of 3 × 3 square frames, in each of which the red and green squares flashed at 5 Hz.
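To make the frame timing concrete, the sketch below (Python, illustrative only; the frame names and the draw/wait helpers are placeholders rather than the authors' presentation code) reconstructs the sequence of displays within one 3-s TR described above.

```python
# Illustrative reconstruction of the 2-IFC frame sequence within one 3-s TR.
# Frame names and the callbacks are placeholders, not the authors' code.
FRAME_SEQUENCE_MS = [
    ("stimulus_1", 350),   # first dot-field frame
    ("blank",      150),   # textured background only, no dots
    ("stimulus_2", 350),   # second dot-field frame
    ("response",  2150),   # blank frame; report which interval was brighter
]

assert sum(d for _, d in FRAME_SEQUENCE_MS) == 3000  # one TR = 3 s

def run_trial(draw_frame, wait_ms):
    """Run one trial given toolbox-specific draw and wait callbacks."""
    for name, duration in FRAME_SEQUENCE_MS:
        draw_frame(name)
        wait_ms(duration)
```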
Figure 2
 
The bottom-left panel shows the fMRI response change on the ignored side when participants attended to the same feature or to a different feature, in both the one-object and two-object conditions, averaged over all blocks for all four participants. The yellow periods correspond to the one-object condition and the blue periods to the two-object condition (OOSF: attended to the same feature in the one-object condition; OODF: attended to a different feature in the one-object condition; TOSF: attended to the same feature in the two-object condition; TODF: attended to a different feature in the two-object condition). The figure shows that in the one-object condition the BOLD signal on the ignored side is higher when participants attended to the same feature (red, OOSF) than when they attended to a different feature (green, OODF). In the two-object condition, the signal is likewise higher when participants attended to the same feature (red, TOSF) than to a different feature (green, TODF). The figure also shows sagittal (top-left), coronal (top-right), and transverse (bottom-right) sections of the region of interest (ROI) for one participant. Detailed ROI parameters are given in the Results.
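As a rough illustration of how such block-averaged time courses can be obtained, the following sketch averages an ROI signal across all blocks of one condition. The ROI time series, block onsets, and block length are hypothetical inputs, not the authors' analysis pipeline.

```python
import numpy as np

def block_average(roi_signal, block_onsets, block_len_trs):
    """Average an ROI time course (one value per TR) across all blocks
    of a single condition, given the TR index at which each block starts."""
    epochs = [roi_signal[onset:onset + block_len_trs] for onset in block_onsets]
    return np.mean(epochs, axis=0)

# One averaged curve per condition, e.g.:
# curves = {name: block_average(roi_signal, onsets, block_len_trs)
#           for name, onsets in condition_onsets.items()}   # keys: OOSF, OODF, TOSF, TODF
```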
Figure 3
 
Overall averaged fMRI results from all four participants in Experiment 1. The left panel shows the fMRI response (average beta weight) to the ignored stimuli and the right panel shows the response to the attended stimuli (OOSF, OODF, TOSF, and TODF are labeled as in Figures 1 and 2). In both the one-object and two-object conditions, the response to the ignored stimuli is significantly higher when participants attended to the same feature than when they attended to a different feature (left). As a control, the response to the attended stimuli shows no significant enhancement for attending to the same versus a different feature (right). (t tests; ** indicates p < 0.008, * indicates p < 0.05, "n.s." indicates no significant enhancement, p ≥ 0.1.)
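The beta weights referred to here come from a general linear model fit to the ROI time course. A minimal ordinary-least-squares sketch is given below; the design-matrix construction (one HRF-convolved boxcar per condition plus a constant column) is an assumption for illustration, not taken from the paper.

```python
import numpy as np

def fit_glm(roi_signal, design_matrix):
    """Ordinary least-squares GLM: one beta per design-matrix column
    (assumed: HRF-convolved boxcars, one per condition, plus a constant)."""
    betas, *_ = np.linalg.lstsq(design_matrix, roi_signal, rcond=None)
    return betas

# betas[col["OOSF"]] vs. betas[col["OODF"]] (and TOSF vs. TODF) then enter
# the t tests summarized in the figure.
```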
Figure 4
 
In Experiment 1, the fMRI response to the ignored stimuli, shown as the beta-weight difference (after fitting the GLM) between attending to the same feature and attending to a different feature, in both the one-object and two-object conditions (OOSF minus OODF, TOSF minus TODF). OOSF, OODF, TOSF, and TODF are labeled as in Figure 1. For all participants, the beta-weight difference for the ignored stimuli between same-feature and different-feature attention is significantly, or at borderline significance, greater than zero (two-sample t test: *** indicates p < 0.01, ** indicates p < 0.05, * indicates borderline significance, p < 0.08) in both the one-object and two-object conditions, suggesting that a feature-based attention effect exists in both conditions, regardless of whether features appear as one object or two objects.
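The per-participant comparison in Figure 4 can be illustrated as follows; the block-wise beta arrays are hypothetical inputs, and the two-sample t test mirrors the one named in the caption.

```python
import numpy as np
from scipy import stats

def feature_effect(betas_same, betas_diff):
    """Mean ignored-side beta difference (same minus different feature) and the
    two-sample t test, for one participant and one object condition."""
    diff = np.mean(betas_same) - np.mean(betas_diff)
    t, p = stats.ttest_ind(betas_same, betas_diff)
    return diff, t, p
```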
Figure 5
 
Psychophysical stimuli and results in Experiment 2. Experiment 2 compared the feature-based attentional enhancement in the one-object condition with that in the two-object condition. The 2-IFC luminance discrimination task is the same as in the fMRI experiment shown in Figure 1, but the stimulus timing differs (the first stimulus frame was displayed for 200 ms, followed by a 100 ms blank interframe, then the second stimulus frame appeared for 200 ms, after which participants pressed the response keys). (a) Stimulus and object appearance in the one-object condition. (b) Stimulus and object appearance in the two-object condition. (c) Percentage-correct performance for the three participants. Yellow bars show percentage correct in the one-object condition and blue bars show percentage correct in the two-object condition. Task performance was better when dividing attention across same rather than different colors, in the one-object condition (OOSF > OODF) as well as in the two-object condition (TOSF > TODF), for all participants. Participant KS scored 75.3% (same feature, SF) versus 53.2% (different features, DF) in the one-object condition (OO), and 74.7% (SF) versus 53.9% (DF) in the two-object condition (TO). Participant WY scored 57.9% (SF) versus 38.5% (DF) for OO and 62.2% (SF) versus 37.9% (DF) for TO. Participant MH scored 47.9% (SF) versus 37.9% (DF) for OO and 46.8% (SF) versus 35.8% (DF) for TO. The magnitude of the enhancement, however, did not differ between the one-object and two-object conditions. (t tests; **** indicates p < 0.001, *** indicates p < 0.01.)
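The percentage-correct values and the same- versus different-feature comparisons can be computed along the lines of the sketch below (hypothetical trial records with condition labels and 0/1 correctness; not the authors' code).

```python
import numpy as np
from scipy import stats

def percent_correct(trials, condition):
    """Percentage of correct 2-IFC responses for one condition label (e.g. 'OOSF')."""
    hits = [t["correct"] for t in trials if t["condition"] == condition]
    return 100.0 * np.mean(hits)

def compare_conditions(trials, cond_a, cond_b):
    """Compare single-trial accuracy (0/1) between two conditions, e.g. OOSF vs. OODF."""
    a = [float(t["correct"]) for t in trials if t["condition"] == cond_a]
    b = [float(t["correct"]) for t in trials if t["condition"] == cond_b]
    return stats.ttest_ind(a, b)
```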
Figure 6
 
Psychophysical stimuli and results in Experiment 3. This experiment compared the feature-based attentional enhancement in the two-same-object condition with that in the two-different-object condition. The stimuli and tasks are identical to those in Experiment 2. (a) Stimulus and object appearance in the two-same-object condition (displayed as two square boxes, as in the two-object condition of Experiments 1 and 2). (b) Stimulus and object appearance in the two-different-object condition (one side displayed as a square box and the other as a circle). (c) Percentage-correct performance for the three participants. Yellow bars show percentage correct in the two-same-object condition and blue bars show percentage correct in the two-different-object condition. Task performance was better when dividing attention across same rather than different colors, in the two-same-object condition (TS_SF > TS_DF) as well as in the two-different-object condition (TD_SF > TD_DF), for all three participants. (TS_SF: attended to the same feature in the two-same-object condition; TS_DF: attended to a different feature in the two-same-object condition; TD_SF: attended to the same feature in the two-different-object condition; TD_DF: attended to a different feature in the two-different-object condition.) Participant KS: 75.0% (same feature, SF) versus 64.0% (different features, DF) correct for TS and 79.4% (SF) versus 63.5% (DF) for TD; participant WY: 73.3% (SF) versus 43.2% (DF) correct for TS and 71.5% (SF) versus 36.5% (DF) for TD; participant MH: 50.8% (SF) versus 35.6% (DF) correct for TS and 51.0% (SF) versus 36.7% (DF) for TD. (t tests; **** indicates p < 0.001, *** indicates p < 0.01, ** indicates p < 0.05.)
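The claim in Figures 5 and 6 that the enhancement itself does not depend on object appearance amounts to comparing the same-minus-different accuracy gain between object conditions, sketched below with hypothetical per-block accuracies (matched block counts are assumed for illustration).

```python
import numpy as np
from scipy import stats

def enhancement(acc_same, acc_diff):
    """Per-block feature-based enhancement: same-feature minus different-feature accuracy."""
    return np.asarray(acc_same) - np.asarray(acc_diff)

# Does the enhancement differ between the two-same-object and two-different-object
# conditions? A nonsignificant result here is what "no difference" means above.
# stats.ttest_ind(enhancement(ts_sf, ts_df), enhancement(td_sf, td_df))
```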