Article | January 2012
A need for more information uptake but not focused attention to access basic-level representations
Journal of Vision January 2012, Vol.12, 15. doi:10.1167/12.1.15
Marlene Poncet, Leila Reddy, Michele Fabre-Thorpe; A need for more information uptake but not focused attention to access basic-level representations. Journal of Vision 2012;12(1):15. doi: 10.1167/12.1.15.

Abstract

Complex visual scenes can be categorized at the superordinate level (e.g., animal/non-animal or vehicle/non-vehicle) without focused attention. However, rapid visual categorization at the basic level (e.g., dog/non-dog or car/non-car) requires additional processing time. Such finer categorization might, thus, require attentional resources. This hypothesis was tested in the current study with a dual-task paradigm in which subjects performed a basic-level categorization task in peripheral vision either alone (single-task condition) or concurrently with an attentionally demanding letter discrimination task (dual-task condition). Our results indicate that basic-level categorization of either biological (dog/non-dog animal) or man-made (car/non-car vehicle) stimuli requires more information uptake but can, nevertheless, be performed when attention is not fully available, presumably because it is supported by hardwired, specialized neuronal networks.

Introduction
We are constantly bombarded by visual information from our environment. Attending to pertinent stimuli enables us to reduce the amount of information to be processed and, consequently, makes visual processing more efficient. However, visual attention has a limited capacity, and only a restricted number of stimuli can benefit from attention at a time. What information do we have about objects located outside the attentional focus when attentional resources are directed toward another specific object? Previous studies that explored this question showed that humans are able to perform simple tasks, such as orientation and color discrimination, in the near absence of attention (Braun, 1994; Braun & Julesz, 1998; Braun & Sagi, 1990; Julesz & Schumer, 1981; Treisman & Gelade, 1980). By contrast, the discrimination of motion or of slightly more complex stimuli composed of conjunctions of simple features, such as randomly oriented Ls and Ts or spatial arrangements of two colors, cannot be performed when attention is engaged elsewhere (Lee, Koch, & Braun, 1999). 
In contrast to these more artificial types of stimuli, the processing of natural stimuli (e.g., scenes and faces) is remarkably efficient even in the absence of attention. Response latencies at the behavioral and electrophysiological levels show that humans detect animals just as fast when two scenes are presented simultaneously as when only one is presented (Rousselet, Fabre-Thorpe, & Thorpe, 2002). Moreover, reporting whether a scene contains an animal or a vehicle (Li, VanRullen, Koch, & Perona, 2002) or whether a face is feminine or masculine (Reddy, Wilken, & Koch, 2004) can be done with little drop in behavioral accuracy even when focused attention is engaged elsewhere. To account for these results, it has been proposed that the categorization of frequently encountered stimuli, such as natural scenes and faces, could rely on selective populations of neurons that have developed over time and that can be activated even in the absence of attention. In contrast, the processing of rarely encountered artificial stimuli would depend on the availability of attentional resources (Fei-Fei, VanRullen, Koch, & Perona, 2005; VanRullen, 2009; VanRullen, Reddy, & Koch, 2004). The automatic visual processing of familiar stimuli could thus be an alternative mechanism by which the brain effortlessly navigates the ocean of visual information. 
The studies mentioned above have demonstrated that natural scene categorization at the superordinate level (e.g., animal/non-animal or vehicle/non-vehicle) is feasible in the near absence of focal attention. However, it is generally recognized that object categorization (grouping different objects into one category and discriminating them from items of another category) can occur at different levels of specificity or detail (Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976). Rosch et al. proposed a distinction between three category levels: the superordinate level (e.g., discriminating an animal from a vehicle), the basic level (e.g., discriminating a dog from another animal), and the subordinate level (e.g., discriminating a Dalmatian from another dog). Although the prevalent consensus supports the idea of a basic-level advantage (Murphy & Brownell, 1985), recent studies have suggested that access to basic-level visual categories requires more processing time than access to superordinate categories and that this difference might arise at different stages of processing. First, more incoming information is required for basic- than for superordinate-level categorization. Indeed, the global properties of an image are available with less presentation time than basic-level details (Fei-Fei, Iyer, Koch, & Perona, 2007; Greene & Oliva, 2009). Second, access to the basic-level category requires additional processing time: Reaction times are longer for basic- than for superordinate-level categorization both for objects (Mace, Joubert, Nespoulous, & Fabre-Thorpe, 2009) and scenes (Joubert, Rousselet, Fize, & Fabre-Thorpe, 2007). These behavioral measures are consistent with electrophysiological studies reporting that global information is transmitted earlier than finer-grained information (Sugase, Yamane, Ueno, & Kawano, 1999). 
The delay in accessing more detailed information at the basic level could reflect additional processing of stimulus information, perhaps as a result of feedback signals and/or attentional processing (Ahissar, Nahum, Nelken, & Hochstein, 2009; Cromer, Roy, & Miller, 2010; Freedman & Miller, 2008; Freedman, Riesenhuber, Poggio, & Miller, 2003; Meyers, Freedman, Kreiman, Miller, & Poggio, 2008). 
The purpose of the present study was to understand the nature of the additional processing required for basic-level categorization and, more specifically, to determine whether the longer processing time could be explained, at least in part, by a necessary allocation of attentional resources. To determine the attentional requirements of basic-level categorization, we used a dual-task paradigm with two types of stimuli often encountered in everyday life. In the first experiment, participants were tested with a natural category (a dog/non-dog categorization task), while in the second experiment they were tested with a man-made category (a car/non-car categorization task). Participants performed each basic-level categorization task either alone, when attention was fully available (single-task condition), or concurrently with an attentionally demanding task that captured most, if not all, of the available attentional resources (dual-task condition). Categorization performance was then compared in the single- and dual-task conditions. If the basic-level categorization task does not require attention, then performance on this task should be very similar in the single- and dual-task conditions. On the other hand, performance in the dual-task condition should be considerably impaired if attention is necessary for successful categorization at the basic level (Braun & Julesz, 1998; Braun & Sagi, 1990; Fei-Fei et al., 2005; Lee et al., 1999; Li et al., 2002; Reddy et al., 2004). Finally, a third experiment tested the amount of information participants were able to glean from the stimuli as a function of presentation time. 
Methods
Participants
Six participants were tested in Experiment I. Four of these and two additional participants were tested in Experiment II. In Experiment III, five participants tested in Experiment I were again tested with biological stimuli, and five tested in Experiment II were tested with man-made stimuli. One author (MP) participated in all three experiments; another author (LR) took part in Experiments I and III. All participants (5 women, 3 men, 1 left-handed, 7 right-handed, between 22 and 35 years of age) had normal or corrected-to-normal acuity and provided written informed consent. 
Image database
The images used in the current experiments were chosen from the image sets used in previous studies (Joubert, Fize, Rousselet, & Fabre-Thorpe, 2008; Mace et al., 2009; Reddy et al., 2004), from the LabelMe database (Russell, Torralba, Murphy, & Freeman, 2008), and from the Internet. Each image, subtending 5° × 5° of visual angle, was presented in the periphery. All images were converted to grayscale to avoid potential facilitation effects of color on recognition (Gegenfurtner & Rieger, 2000; Rossion & Pourtois, 2004; Yip & Sinha, 2002). Different images were used in the training and testing phases, and each image was viewed at most 6 times by a subject during the entire experiment (3 times in the superordinate-level categorization task and 3 times in the basic-level categorization task). Participants categorized natural scenes that contained one or several visual objects belonging to one of four categories: animals, vehicles, dogs, and cars (for examples, see Figure 1). The animal category (non-dog animals, n = 1384) contained mammals, birds, fish, reptiles, and insects; the vehicle category (non-car vehicles, n = 1178) included trucks, boats, airplanes, motorbikes, bicycles, balloons, etc.; the dog category (n = 1078) included dogs of various breeds: Dalmatians, Poodles, Spaniels, Bulldogs, etc.; and the car category (n = 830) included all kinds of cars: racing cars, sport cars, vintage cars, modern cars, and moving, immobile, and damaged cars. The objects present in the images were of various orientations, positions, and sizes. The context surrounding them could be natural or urban, indoor or outdoor. 
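Converting the stimuli to grayscale removes chromatic cues before presentation. As a minimal sketch, the standard Rec. 601 luminance weights can be used; note that the study does not specify which conversion was applied, so the weights below are an assumption:

```python
def to_grayscale(pixel):
    """Map an (R, G, B) pixel to a single luminance value using the
    standard Rec. 601 weights (an assumption; the study does not say
    which conversion was used)."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def grayscale_image(rgb_pixels):
    """Convert a list of (R, G, B) tuples to luminance values."""
    return [to_grayscale(p) for p in rgb_pixels]
```

In practice, an image library's built-in conversion would typically be used instead of a hand-rolled loop.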
Figure 1
 
Examples of natural scene images used in the study. In Experiment I (top row), the objects contained in the images were biologically pertinent and could be categorized as (A) animal at the superordinate level or (B) dog at the basic level. In Experiment II (bottom row), stimuli were man-made objects categorized as (C) vehicle at the superordinate level or (D) car at the basic level.
Apparatus
Participants were seated in a dimly lit room, approximately 75 cm from a computer screen (1024 × 768 pixels, refresh rate: 100 Hz). The stimulus display was synchronized with the refresh rate of the monitor. 
Procedure
We used a dual-task paradigm. Participants performed two tasks: a central attentionally demanding task and a peripheral task for which attentional requirements were tested. These two tasks were performed either alone, when attention was fully available (single-task condition), or simultaneously (dual-task condition) with subjects instructed to prioritize the central task and perform the peripheral task as well as they could. If the peripheral task does not require attention, then performance should be very similar in the single- and dual-task conditions. On the other hand, if it requires attentional resources, then performance in the dual-task condition should be considerably impaired. 
Dual-task paradigm
The experimental timeline for one trial is illustrated in Figure 2. The stimulus display was the same for all conditions and only the instructions differed for each condition. Each trial started with a fixation cross, followed by the presentation of the central attentionally demanding task (letter discrimination task, see below for more details). A natural scene was presented 16 ms later in the periphery. The stimulus durations for the central and peripheral stimuli were determined individually for each subject (see below for the procedure). The central and peripheral stimuli were masked with the constraint that the peripheral stimulus was always masked before the central stimulus. In this paper, the term “SOA” refers to the time between the onset of the stimulus and the onset of its mask. Thus, the central SOA was always longer than the peripheral SOA. At the end of the trial, subjects made behavioral reports on the central and/or peripheral tasks depending on the task condition (single or dual task). Participants were asked to be as accurate as possible and no constraint was placed on their reaction times. An auditory tone was provided as feedback for incorrect answers for each task in all conditions. 
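The trial timing just described can be summarized as a simple event schedule. The sketch below is a hypothetical illustration (function and variable names are not from the original study); it encodes the constraints that the peripheral scene appears 16 ms after the letters and that the peripheral mask always precedes the central mask:

```python
def trial_schedule(central_soa_ms, peripheral_soa_ms, fixation_ms=350):
    """Build the event timeline for one dual-task trial.

    Times are in ms relative to letter onset (t = 0). The peripheral
    scene appears 16 ms after the letters, and each stimulus is
    replaced by its mask after its own SOA.
    """
    scene_onset = 16
    events = {
        "fixation_onset": -fixation_ms,   # fixation cross, 300-400 ms before the letters
        "letters_onset": 0,
        "scene_onset": scene_onset,
        "scene_mask": scene_onset + peripheral_soa_ms,
        "letters_mask": central_soa_ms,
    }
    # The peripheral stimulus must always be masked before the central one.
    assert events["scene_mask"] < events["letters_mask"]
    return events

# Example with the approximate mean SOAs reported for the dog/non-dog task:
timeline = trial_schedule(central_soa_ms=215, peripheral_soa_ms=180)
```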
Figure 2
 
Schematic timeline for one trial in the dual-task paradigm. At the end of a trial, participants were asked to perform the central letter discrimination task (whether all letters were the same or one was different from the others) and/or the peripheral task (e.g., whether a dog was present or not). The display was the same for all conditions and only the instructions differed. The letters and the peripheral stimulus were masked individually. The central SOA and the peripheral SOA indicate the presentation time for the letters (∼215 ms) and for the dog/non-dog task (∼180 ms), respectively. They were adjusted for each subject and each task.
The central attentionally demanding task: A letter discrimination task
Each trial started with a fixation cross appearing at the center of a black screen for a random duration between 300 and 400 ms before the first stimulus appeared. At 0 ms, five letters, randomly oriented Ls and Ts, were presented in the center of the screen at 5 of 9 possible locations within a radius of 1.2° from the center. All five letters could be the same (5 Ts or 5 Ls) or one of them could be different from the other four (1 L among 4 Ts or 1 T among 4 Ls). Participants were asked to report whether all letters were the same or not by pressing one of two keys on the keyboard with their left hand. Each letter was individually masked by an F, rotated by an angle corresponding to the T or L it replaced. The central SOA was determined independently for each subject (see experimental procedure Step 1 below) and was the same in both single- and dual-task conditions. 
Peripheral task
The peripheral stimulus (natural scene or disk), subtending 5° × 5° of visual angle, was presented 16 ms after the onset of the central stimulus at a point on the edge of an imaginary 12° × 10° rectangle and masked at the end of its presentation duration. It was always masked before the central stimulus was masked. Participants performed 3 different peripheral tasks in each experimental session: a categorization at the basic level (Task A) and two control tasks: a categorization at the superordinate level, previously reported not to require focal attention (Task B), and a disk discrimination task known to be attentionally demanding (Task C; Braun & Julesz, 1998; Fei-Fei et al., 2005; Li et al., 2002; Reddy et al., 2004; VanRullen et al., 2004). Each peripheral task was performed twice in the single-task condition and 3 times in the dual-task condition in separate blocks. Participants reported their answer by pressing one of two keys on the keyboard with their right hand. 
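Placing each peripheral stimulus at a random point on the edge of the 12° × 10° imaginary rectangle amounts to sampling uniformly along the rectangle's perimeter. A minimal sketch (the study does not describe its sampling procedure, so this is an illustrative assumption):

```python
import random

def point_on_rectangle_edge(width=12.0, height=10.0):
    """Return a point (x, y), in degrees of visual angle relative to
    fixation at (0, 0), drawn uniformly from the perimeter of a
    width x height rectangle centered on fixation."""
    w, h = width, height
    d = random.uniform(0, 2 * (w + h))   # distance walked along the perimeter
    if d < w:                            # top edge, left to right
        return (-w / 2 + d, h / 2)
    d -= w
    if d < h:                            # right edge, top to bottom
        return (w / 2, h / 2 - d)
    d -= h
    if d < w:                            # bottom edge, right to left
        return (w / 2 - d, -h / 2)
    d -= w
    return (-w / 2, -h / 2 + d)          # left edge, bottom to top
```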
A peripheral face–gender discrimination task was used during dual-task training (see Step 1 in Table 1). 
Table 1
 
Procedure followed by each subject to determine the SOAs required in each task. SOAs were adjusted after each block as follows: They were decreased by 10 ms when participants' performance in the single-task condition was above 85% and increased by 10 ms when it fell below 75%. Note that the peripheral SOA determined in Step 1 is not used subsequently as it refers to a categorization task (female/male faces) that we used to train subjects on the dual-task paradigm and to determine central SOAs. Peripheral SOAs relevant for the present experiments are individually determined in Step 2.
Dual task: Experimental procedure for training and testing
In separate blocks of the experiment, participants performed each peripheral task (A, B, C) simultaneously with the central letter task. They were asked to fixate the center of the screen and give priority to the central task such that performance on this task would not be significantly different in the single- and the dual-task conditions. Participants were also required to perform the peripheral task as well as possible. 
Performing at a stable level in this dual-task paradigm requires extensive training. Moreover, the dual-task paradigm is valid only if the SOAs for each task are properly calibrated. Indeed, if the stimulus presentation times are too long, participants have ample time to shift their attention from one task to the other in the dual-task condition. Therefore, for each participant and each of the central and peripheral tasks, we determined the SOA at which performance in the single-task condition would be between 75 and 85%. These SOAs were determined in 2 separate training steps (Steps 1 and 2 below) before the final test sessions (Step 3 below). Thus, the experiment consisted of 3 stages (see Table 1). 
Step 1: Dual-task training and determining individual central SOA
The first step of the experiment was a training period of approximately 7 h that allowed participants to familiarize themselves with the dual-task paradigm and, more importantly, allowed the SOAs on the central task to stabilize. Both goals were achieved concurrently by mixing single-task blocks (central or peripheral) with dual-task blocks. A session was composed of 4 blocks of the central letter task in the single-task condition, 4 blocks of a peripheral task in the single-task condition, and 6 blocks of the dual-task condition. During the experiment, the central and peripheral SOAs were decreased by 10 ms each time the mean performance, in a 48-trial block, was above 90%. At the beginning of the training, the letters were displayed for 500 ms (Reddy et al., 2004). To avoid overlearning the main task and stimuli (animals and vehicles), participants performed a face–gender discrimination task in the periphery during this initial training phase; the peripheral SOAs for the natural scenes were determined in Step 2 (see below). The starting face SOA was 250 ms (Reddy et al., 2004). This stage was completed when the SOAs on the letter task were stable for three consecutive sessions. In the present experiment, the central SOA had to be longer than the peripheral SOA in order to prevent subjects from shifting their attention to the peripheral task after completing the central task (as mentioned above, participants were instructed to prioritize the central task). One subject (GV) performed extremely well on the central letter task, resulting in a central SOA shorter than the peripheral one. Hence, for this subject, we increased the difficulty of the central task by presenting 9 letters instead of 5 so that the central SOA was longer than the peripheral one. After this training stage, the central SOAs determined individually for each participant varied from 170 ms for subject GV up to 260 ms for subject GBJ (mean ≈ 215 ms). 
These central SOAs were then used in Steps 2 and 3 of the experiment (see below) regardless of the peripheral task. 
Step 2: Determining individual peripheral SOAs
In this phase of training, the SOAs for the peripheral tasks of interest were determined for each task and each participant. In particular, as described above, participants performed 3 different peripheral tasks: a basic-level categorization (Task A), a superordinate-level categorization (Task B), and a color pattern discrimination task (Task C, red half on the left or right) known to be attentionally demanding. In this phase, as well as in the data collection phase (Step 3), a session consisted of 19 blocks of 48 trials each: 4 blocks of the central letter task in the single-task condition, 6 blocks of the peripheral tasks in the single-task condition (2 Task A, 2 Task B, 2 Task C), and 9 blocks of the dual-task condition (3 of each task). Block order was randomized. During training Step 2, based on a previous experiment (Li et al., 2002) and pilot data, the starting SOA was set at 170 ms for the basic-level categorization task and at 100 ms for the superordinate-level categorization and color pattern discrimination tasks. SOAs were then decreased by 10 ms when participants' performance in the single-task condition was above 85% and increased by 10 ms when it fell below 75%. Participants moved on to the next stage when their SOAs were stable for three consecutive sessions. On average, training Step 2 took 4 h. In order to avoid any bias related to image learning, the stimuli seen by a participant during this step were not used in the final testing session. 
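The block-by-block SOA adjustment used in Step 2 amounts to a simple up-down staircase with a stopping rule. A minimal sketch (function names are illustrative, not from the study):

```python
def adjust_soa(soa_ms, block_accuracy, step_ms=10, upper=0.85, lower=0.75):
    """One staircase update: after each 48-trial single-task block,
    shorten the SOA by 10 ms when accuracy exceeds 85%, lengthen it
    by 10 ms when accuracy falls below 75%, otherwise leave it."""
    if block_accuracy > upper:
        return soa_ms - step_ms
    if block_accuracy < lower:
        return soa_ms + step_ms
    return soa_ms

def is_stable(session_soas, n=3):
    """Training ends once the SOA is unchanged over n consecutive sessions."""
    return len(session_soas) >= n and len(set(session_soas[-n:])) == 1
```

With this rule, each subject's SOA converges to a value at which single-task accuracy sits in the 75-85% band, which is what makes single- vs. dual-task comparisons interpretable.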
Step 3: Final testing session
Once training was complete and SOAs defined for each participant and each task (central task and peripheral tasks A, B, and C), data were collected for 10 sessions in Step 3. As described above, a session consisted of 19 blocks of 48 trials each. A session was considered valid if performance on the letter discrimination task was not significantly lower in the dual-task condition than in the single-task condition (paired t-test, n.s.). This ensured that participants focused their attention on the central letter task. Over all sessions and participants, only 4 sessions (3 in Experiment I and 1 in Experiment II) out of 102 were rejected by this criterion. 
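The session-rejection criterion (central-task performance not significantly lower in the dual-task than in the single-task condition) rests on a paired t-test. The sketch below computes the paired t statistic in pure Python over made-up per-block accuracies; in practice one would obtain the p-value from a library routine such as scipy.stats.ttest_rel:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(single, dual):
    """Paired t statistic over matched accuracy measurements (df = n - 1).
    Positive values mean single-task accuracy exceeded dual-task accuracy."""
    diffs = [s - d for s, d in zip(single, dual)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical central-task accuracies for one session (illustrative only):
single_blocks = [0.90, 0.90]
dual_blocks = [0.80, 0.60]
t = paired_t(single_blocks, dual_blocks)   # t is 2.0 for this toy case
# The session would be rejected if the dual-task drop were significant,
# i.e., if t exceeded the one-tailed critical value for df = n - 1.
```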
Data analysis
Once normality of the data was verified with a Shapiro–Wilk test, a 2-way repeated measures ANOVA and paired t-tests were performed to compare single- and dual-task performance for the superordinate- and basic-level categorization tasks. To summarize and compare participants' performance across different stimulus types, results in the dual-task condition were normalized with respect to the corresponding single-task performance. For each subject, mean performance in the single-task condition is taken as 100% and chance as 50%. Thus, in the dual-task condition, 
Normalized performance = 1/2 + 1/2 × [(P2 − 1/2) / (P1 − 1/2)],  (1)
where P2 and P1 refer to the mean performance in the dual- and single-task conditions, respectively. 
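Equation 1 is a linear rescaling that maps each subject's own single-task performance to 100% while keeping chance at 50%. A direct implementation:

```python
def normalized_performance(p_dual, p_single, chance=0.5):
    """Equation 1: rescale dual-task accuracy so that the subject's
    single-task accuracy maps to 1.0 (100%) while chance stays at 0.5.
    p_dual and p_single are proportions correct (P2 and P1)."""
    return chance + (1 - chance) * (p_dual - chance) / (p_single - chance)

# e.g., 71.4% dual-task accuracy against 74.7% single-task accuracy:
score = normalized_performance(0.714, 0.747)   # about 0.933, i.e., ~93%
```

Normalizing this way lets subjects with different raw SOA-limited accuracies be compared on the same scale: a value of 1.0 means no dual-task cost at all, and 0.5 means performance fell to chance.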
Experiment I: Basic-level categorization of natural stimuli
As described above, attentional requirements were investigated in the performance of three peripheral tasks, all performed in the same experimental session. Participants performed a basic dog/non-dog animal categorization task (Task A) and a superordinate-level categorization task (Task B: animal/non-animal) in order to compare our results with previous ones (Fei-Fei et al., 2005; Li et al., 2002). Furthermore, to verify the efficacy of our protocol, they performed a color pattern discrimination task (Task C), known to be attentionally demanding. Over a period of five days (two 1/2 h sessions per day), participants performed 10 experimental sessions in Experiment I, in each of which they were tested on all 3 peripheral tasks (A, B, and C). In each of these sessions, they performed a total of 19 blocks of 48 trials: 4 single central task blocks, 6 single peripheral task blocks (2 Task A, 2 Task B, and 2 Task C), and 9 dual-task blocks (3 of each task). Block order was random, and instructions about the condition (single- or dual-task) and the peripheral task (Task A, B, or C) were given at the beginning of each block. 
Task A: Basic-level categorization of natural scenes
Task A was basic-level categorization of natural images. Subjects were asked to report whether the scene contained a dog or another animal. Half the images included at least one dog, and the other half included other animals. A random half of the dog images used in this task was also used as animal images in the superordinate task (Task B). This meant that for this subset of images subjects performed categorizations at both the superordinate and basic levels in separate blocks. The presentation of the peripheral stimulus was followed by a mask (8 different masks were used, composed of a mixture of white noise at different spatial frequencies on which a gray texture was superimposed). Peripheral SOAs for the dog/non-dog task, determined for each subject separately (described in experimental procedure Step 2), ranged from 140 ms for subject GV to 220 ms for subject GBJ (see Table 2). 
Table 2
 
Stimulus presentation time (SOA) used in the final testing session for the 6 participants in Experiment I.
Subject   Central SOA (ms)   Peripheral SOAs (ms)
          Letters            Dog    Animal   Disk
MP        220                170     90       70
LR        200                170    100       50
GV        170                140     60       60
GBJ       260                220    110       90
LD        250                205     80       60
RV        200                170     70       70
Task B: Superordinate-level categorization of natural scenes
For Task B, participants performed superordinate-level categorization of natural images and reported whether a scene contained an animal or not. In this experiment, half of the natural scenes contained one or more animals and the other half one or more vehicles. Among the animal stimuli, half the images contained dogs, and the other half had other animals. The same 8 different masks used for the basic-level categorization followed the presentation of the peripheral stimulus. Individual peripheral SOAs (experimental procedure Step 2) for the animal/non-animal task varied from 60 ms for subject GV to 110 ms for subject GBJ (see Table 2). 
Task C: Color pattern discrimination
In different blocks, participants also performed a peripheral task known to be attentionally demanding in order to establish that the dual-task paradigm efficiently withdraws attentional resources from the periphery. For this task, the peripheral stimulus was a disk divided vertically into two colored halves, one red and one green. Participants reported whether the red half of the disk was on the left or the right side. The presentation of the disk was followed by a mask (6 different masks composed of irregular patches of red and green were used). SOAs for this task were determined individually for each subject (experimental procedure Step 2) and ranged from 50 ms for subject LR to 90 ms for subject GBJ (see Table 2). 
Experiment II: Basic-level categorization of man-made stimuli
This experiment was similar to Experiment I except that for the peripheral task subjects performed basic- and superordinate-level categorizations of man-made stimuli (Task A: car/non-car and Task B: vehicle/non-vehicle). They also performed the disk discrimination task as a control (Task C). Out of the 6 participants, 4 had performed Experiment I. At the end of training (Steps 1 and 2), the peripheral SOAs varied from 75 ms for subject LD to 110 ms for subject RC for the superordinate-level categorization task and from 105 ms for subject GV to 175 ms for subject LD for the basic-level categorization task (see Table 3). Participants performed at least 7 test sessions for the final data collection stage. 
Table 3
 
Stimulus presentation time (SOA) used in the final testing session for the 6 participants in Experiment II.
Subject   Central SOA (ms)   Peripheral SOAs (ms)
          Letters            Car    Vehicle   Disk
MP        220                160     70        70
GV        170                105     75        60
GBJ       260                145     90        90
LD        250                175     75        60
RS        200                140    100       120
RC        190                130    110        80
Experiment III: Effect of stimulus presentation time in the peripheral task
Peripheral SOAs were always longer for basic than for superordinate categorization. In a third experiment, we determined whether the information available to the subjects when successfully categorizing animals and vehicles at the superordinate level in Experiments I and II could also lead to successful categorization at the basic level. In other words, when subjects can successfully determine whether an image contains an animal or not, can they actually report what kind of animal they have detected? To address this question, in Experiment III, subjects performed a basic-level categorization but with stimulus durations that were limited to those obtained in the superordinate-level task in earlier experiments. Five subjects from Experiment I were tested on biologically relevant stimuli (animal/non-animal and dog/non-dog) and 5 subjects from Experiment II were tested on man-made stimuli (vehicle/non-vehicle and car/non-car). Note that the same experimental design as in Experiments I and II was used (both central and peripheral stimuli were presented), but participants performed only the peripheral categorization tasks in the single-task condition, when attentional resources were fully available (Rousselet, Thorpe, & Fabre-Thorpe, 2004). 
Results
In this study, we used a dual-task paradigm to determine the attentional requirements of various categorization tasks. Participants were required to perform a central attentionally demanding task (letter discrimination) and a peripheral categorization task. Experiment I tested biologically relevant objects (i.e., categorization of animals) in the periphery and Experiment II tested man-made objects (i.e., categorization of vehicles). The central and the peripheral tasks were performed either separately (single-task condition) or simultaneously (dual-task condition). In the dual-task condition, subjects were instructed to prioritize the central letter task such that performance on this task in the single- and dual-task conditions would be equivalent; sessions in which this criterion was not satisfied were rejected from all subsequent analyses. The role of attention was measured by comparing participants' performance on the categorization task in the single-task condition (when attentional resources were available) and in the dual-task condition (when spatial attention was focused on the central task). 
Experiment I: Basic-level categorization of natural stimuli
In the first experiment, we tested whether biologically relevant objects could be categorized at the basic level while attention is engaged on a demanding central letter discrimination task. In particular, we compared the performance of participants when performing a dog/non-dog discrimination task either alone or in the dual-task condition. Single-task performance in the dog/non-dog animal discrimination task was 74.7% ± 2.0% and performance on the same task in the dual-task condition was 71.4% ± 2.0% (Figure 3A1). These performance values were not significantly different from each other (paired t-test, p = 0.35). Moreover, individual t-tests showed that there was no significant difference between single- and dual-task performance for five of the six participants (paired t-test, p > 0.05); after correction for multiple comparisons (Bonferroni method), this held for all participants. To summarize these results, we calculated the normalized performance on the basic-level categorization for each subject in the dual-task condition as a function of their performance in the single-task condition (Figure 3A2, see Methods section). Normalized performance for the group of participants in the dual-task condition was above 85% of the performance in the single-task condition. These results indicate that even though there was a slight decrement in accuracy when attention was not fully available, performance values were still remarkably high and basic-level categorization of natural stimuli could be performed well with little or no attentional resources. 
Figure 3
 
Results of six participants in the dual-task paradigm for biological and artificial stimuli in the periphery. (1) Individual results. The horizontal axis represents performance on the central attentionally demanding letter discrimination task. The vertical axis represents accuracy (A) on the peripheral basic-level categorization task (dog/non-dog animal), (B) on the superordinate-level categorization task (animal/non-animal), and (C) on the color pattern discrimination task. Participants' mean performance is represented by a blue circle in the single-task condition (single central task and single peripheral task) and a red circle in the dual-task condition. Each black point represents participants' performance for a 48-trial block in the dual-task condition. For plotting purposes, we assume that in the single-task condition performance on the other task was at chance (50%). The error bars represent the SEM. For all participants, performance in the dual-task condition was not significantly different from performance in the single-task condition (paired t-test, p > 0.05) except GV in the dog/non-dog animal task (A1) and except GBJ in the animal/non-animal task (B1). On the contrary, performance of all participants in the color pattern discrimination task (C1) was dramatically impaired in the dual-task condition compared to the single-task condition (paired t-test, p < 10−5). (2) Normalized results. Each circle represents the mean of one participant's performance in the dual-task condition, normalized by his/her performance in the single-task condition. Normalized values are obtained by a linear scaling that maps the average single-task performance to 100%, leaving chance at 50% (see Methods section). 
These results demonstrate that participants cannot perform an attentionally demanding task when attentional resources are removed from the periphery (C2), but discriminating biologically relevant stimuli at the superordinate (B2) and basic levels (A2) is robust even in the near absence of attention.
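The normalization described in the caption is a linear rescaling that pins chance at 50% and maps each participant's single-task accuracy to 100%. A minimal sketch of that mapping (the function name is ours, not the paper's):

```python
def normalize(dual, single, chance=50.0):
    """Linearly rescale dual-task accuracy so that chance stays at
    chance and the single-task accuracy maps to 100%."""
    return chance + (100.0 - chance) * (dual - chance) / (single - chance)

# 71.4% dual-task accuracy against a 74.7% single-task baseline:
normalize(71.4, 74.7)  # ≈ 93.3, i.e., dual-task accuracy at ~93% of single-task level
```

Expressing dual-task accuracy on this common scale lets tasks with different single-task baselines be compared directly.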
Previous studies have shown that natural scene categorization at the superordinate level (i.e., animal/vehicle) can be performed outside the focus of attention (Li et al., 2002; VanRullen et al., 2004). To compare the performance of our subjects with these previous reports, we also tested them on a similar animal/non-animal task (Figure 3B1). Normalizing these results as described above indicated that participants were able to perform the superordinate-level categorization task in the dual-task condition at above 90% of their performance in the single-task condition (Figure 3B2). This result confirms previous findings that the animal/non-animal categorization task can be performed in the near absence of attention. 
We then compared behavioral performance between the two levels of categorization (superordinate and basic) and the two attentional conditions (single and dual) with a 2 × 2 repeated measures ANOVA. Performance was slightly impaired in the dual-task condition compared to the single-task condition (F(1,5) = 7.28, p = 0.04), and performance on the superordinate-level categorization task was superior to that on the basic-level task (F(1,5) = 9.76, p = 0.03) in both attentional conditions. The interaction between level of categorization and attentional condition was not significant (F(1,5) = 0.7, p = 0.4), indicating that withdrawing attention similarly affects both levels of categorization. These results demonstrate that categorization of natural stimuli at the basic level can be performed as efficiently as at the superordinate level when attention is focused on a central attentionally demanding task. 
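For a 2 × 2 within-subject design such as this one, each one-degree-of-freedom effect of a repeated-measures ANOVA reduces to a paired t-test on per-subject contrast scores, with F(1, n−1) = t². A sketch with hypothetical accuracies (not the study's data):

```python
import math
from statistics import mean, stdev

def rm_effect_F(contrast):
    """F(1, n-1) for a one-degree-of-freedom within-subject effect,
    computed as the square of a paired t on per-subject contrasts."""
    n = len(contrast)
    t = mean(contrast) / (stdev(contrast) / math.sqrt(n))
    return t * t

# Hypothetical per-subject accuracies (%):
# columns are [superordinate_single, superordinate_dual, basic_single, basic_dual].
scores = [
    [80, 78, 76, 73],
    [79, 77, 74, 72],
    [77, 76, 72, 70],
    [82, 79, 78, 74],
    [78, 75, 73, 71],
    [81, 78, 75, 73],
]
attention   = [(ss + bs) - (sd + bd) for ss, sd, bs, bd in scores]  # single vs. dual
level       = [(ss + sd) - (bs + bd) for ss, sd, bs, bd in scores]  # superordinate vs. basic
interaction = [(ss - sd) - (bs - bd) for ss, sd, bs, bd in scores]
F_attention, F_level, F_inter = map(rm_effect_F, (attention, level, interaction))
```

A non-significant `F_inter` is what licenses the conclusion that withdrawing attention costs both levels of categorization about equally.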
The interpretation of these results is based on the supposition that the central letter discrimination task is effective at engaging participants' attentional resources at the center of the screen, implying that performance on attentionally demanding tasks in the periphery should suffer substantially in the dual-task condition. To verify that this was indeed the case, we included a control condition in which subjects were tested in the periphery on a color pattern discrimination task known to be attentionally demanding (Li et al., 2002; VanRullen et al., 2004). Specifically, participants were asked to discriminate between two bisected colored disks as shown in Figure 3C. Participants received the same amount of training on this task as for the superordinate and basic levels of categorization. Contrary to the results obtained for natural object categorization, a dramatic decrease in performance was observed for the six participants in the disk discrimination task when it was performed in the dual-task condition (50.0 ± 2.3%) compared to when it was performed in the single-task condition (80.4 ± 4.0%; paired t-test, p < 10−4; Figure 3C1). Indeed, performance in the dual-task condition was not significantly different from chance level (paired t-test, p = 0.8). Normalized performance in the dual-task condition for this task was between 44% and 56% of the level of performance obtained when the task was performed alone (Figure 3C2). These results confirm that the attentional requirements of the central letter task lead to a clear decrease in performance in the dual-task condition when the peripheral task also requires attention. In contrast, performance on the basic and superordinate level categorization tasks was far from being at chance (paired t-test, p < 10−4) and significantly different from performance obtained on the disk discrimination task in the dual-task condition (paired t-test, p < 10−4). 
Experiment II: Basic-level categorization of man-made stimuli
It is possible that the results obtained in Experiment I could be explained by evolved neural networks that specifically process biologically pertinent stimuli (New, Cosmides, & Tooby, 2007). Thus, in a second experiment, we tested whether other types of stimuli that appeared relatively recently in our environment, and are necessarily learned during our lifetime, could also be categorized at the basic level without a need for attentional resources. Specifically, we tested the attentional requirements of a car/non-car vehicle discrimination task using the same dual-task paradigm as in Experiment I. 
Mean performance on the basic-level categorization task (car/non-car) was 82.1 ± 2.8% in the single-task condition and 78.1 ± 3.4% in the dual-task condition (Figure 4A1). Even though performance was slightly lower in the dual-task condition compared to the single-task condition, it was still well above chance (paired t-test, p < 10−5). On the contrary, when these participants were tested on the disk discrimination task used in Experiment I, their performance in the dual-task condition was not different from chance (paired t-test, p = 0.7) and well below performance obtained on the car/non-car task in the dual-task condition (paired t-test, p < 10−4). These results show that participants were able to categorize man-made stimuli efficiently even when their attention was engaged by an attentionally demanding task. Moreover, normalized performance of each participant for the car/non-car categorization in the dual-task condition was above 90% (Figure 4A2). Thus, basic-level categorization of man-made stimuli is also possible when attentional resources are not fully available. 
Figure 4
 
Results of six participants in the dual-task paradigm for man-made object categorization tasks in the periphery. Legend as in Figure 3. Performance in the dual-task paradigm was tested for peripheral categorization at the (A) basic (car/non-car vehicle) and (B) superordinate levels (vehicle/non-vehicle). There was no significant difference in performance between the single-task condition and the dual-task condition (paired t-test, p > 0.05) on the basic-level categorization task for MP and GBJ (A1) and for MP, GV, and RC on the superordinate-level categorization (vehicle/non-vehicle) task (B1). The normalized dual-task performance for the basic-level categorization task (A2) and for the superordinate-level categorization (B2) was above 85% of the performance in the single-task condition. However, when tested in the same paradigm in a color pattern discrimination task, participants' performance was at chance level (not shown here). This suggests that although performance was slightly lower in the dual-task condition compared to the single-task condition, participants could perform man-made stimulus categorization tasks at the basic and superordinate levels in the near absence of attention.
Again, to compare the performance of our subjects with previous studies reporting that superordinate-level categorization tasks of man-made stimuli can be performed in the near absence of attention (Fei-Fei et al., 2005; Li et al., 2002), we also tested our participants on a vehicle/non-vehicle task (Figure 4B1). Each participant's normalized performance in the dual-task condition was above 85% of their performance in the single-task condition (Figure 4B2), which confirms previous results. 
As in Experiment I, a 2 × 2 repeated measures ANOVA (single-/dual-task condition × superordinate/basic categorization) was computed. Performance was higher in the single-task condition than in the dual-task condition (F(1,5) = 30.19, p < 0.05), and there was no effect of the level of categorization (F(1,5) = 3.31, p = 0.13). Again, the interaction between the main factors was not significant (F(1,5) = 1.56, p = 0.3), indicating that withdrawing attention similarly affects both levels of categorization. Man-made stimuli seem to suffer slightly more than natural stimuli when categorized outside the focus of attention, at both the superordinate and basic levels. However, normalized performance in the dual-task condition was still above 85% of that obtained in the single-task condition and did not drop to chance as observed for the disk discrimination task. Thus, basic- and superordinate-level categorization of man-made stimuli can be performed efficiently in the near absence of attention. 
Biologically relevant or not, objects were categorized efficiently at both the superordinate and basic levels concurrently with an attentionally demanding task. Figure 5 summarizes the results obtained in Experiments I and II: mean normalized performance for all natural scene categorization tasks lay above 90%, whereas mean normalized performance for the disk discrimination task was at chance level. 
Figure 5
 
Summary of the results for the peripheral tasks in Experiments I and II in the dual-task condition. Each circle represents the mean of the normalized performance across all subjects for each task. Error bars represent the SEM. Natural scene categorization was performed with high levels of accuracy whatever the level of categorization (superordinate or basic) or the nature of the object (biologically relevant: animal, or not: vehicle) when attention was not fully available. In contrast, performance on the disk discrimination task was reduced to the level of chance in the same dual-task paradigm.
Experiment III: Effect of stimulus presentation time on peripheral task performance
For both the superordinate- and basic-level categorization tasks, we equated task difficulty by adjusting the SOAs such that performance in the single-task condition was approximately 75% (see Methods section). However, to achieve such levels of performance, the basic-level tasks required SOAs that were approximately twice as long as those of the corresponding superordinate-level tasks (∼180 ms for the dog/non-dog task and ∼140 ms for the car/non-car compared to ∼85 ms for the animal/non-animal task and ∼87 ms for the vehicle/non-vehicle task). This finding suggests that at short exposure times, participants could reliably detect an animal or a vehicle in a scene (superordinate-level category) in the near absence of attention but might be unable to access basic-level details of that scene (Fei-Fei et al., 2007; Greene & Oliva, 2009; Joubert et al., 2007). To address this question, we asked participants to perform basic-level categorization using the same shorter SOA that allowed for successful categorization at the superordinate level in Experiments I and II. 
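The Methods section specifies only that SOAs were adjusted so that single-task accuracy sat near 75%; one simple block-wise adaptation rule of the kind that could serve this purpose (our illustration, not the paper's actual procedure) is:

```python
def adjust_soa(soa, accuracy, target=75.0, step=10):
    """Hypothetical block-wise SOA adaptation: lengthen the SOA (ms)
    when block accuracy (%) falls below the target band, shorten it
    when above, and leave it unchanged inside the band."""
    if accuracy < target - 2:
        return soa + step
    if accuracy > target + 2:
        return max(step, soa - step)
    return soa
```

Applied over successive blocks, such a rule settles on the SOA that yields roughly 75% accuracy for each participant and task, which is what allows task difficulty to be equated across categorization levels.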
In Experiment III, the same experimental paradigm as in Experiments I and II was used, but participants performed only the peripheral categorization tasks in the single-task condition (when attention was fully available). Subjects already tested in Experiment I were tested on the animal/non-animal and dog/non-dog animal tasks, and subjects already tested in Experiment II were tested on the vehicle/non-vehicle and car/non-car vehicle tasks. The critical manipulated variable was the SOA. For both the superordinate- and basic-level categorization tasks, the SOA used in Experiment III was the one determined for each subject in Experiments I and II for the superordinate categorization tasks (ranging from 60 to 110 ms for animal/non-animal and from 70 to 110 ms for vehicle/non-vehicle). As expected, performance on the animal/non-animal task was the same in Experiments I and III (paired t-test, p = 0.4; Figure 6A), as was performance on the vehicle/non-vehicle task in Experiments II and III (paired t-test, p = 0.2; Figure 6C). In contrast, a significant drop was observed for basic-level performance. Performance on the dog/non-dog task in Experiment III (64.5 ± 4.0%) dropped significantly compared to performance on the same task in Experiment I (74.8 ± 2.1%, paired t-test, p < 0.01), in which longer SOAs were used (∼180 ms; Figure 6B). Similarly, performance on the car/non-car task in Experiment III (75.0 ± 1.9%) dropped significantly compared to performance in Experiment II (82.3 ± 3.1%; paired t-test, p < 0.05), in which SOAs of around 140 ms were used (Figure 6D). Thus, these results indicate that although participants could reliably report the presence of an animal or a vehicle in a scene with a short stimulus presentation time, they often did not have any further information about what type of animal or vehicle they had just seen. 
To access information at the basic level, participants needed longer stimulus presentation times, even though they could perform the task in the near absence of attention. 
Figure 6
 
Mean performance on the single peripheral tasks in Experiments I and II (dark color) and in Experiment III (light color). In Experiments I and II, stimulus durations used for the basic-level categorization tasks, dog/non-dog (∼180 ms) and car/non-car (∼140 ms), were around twice those used for the corresponding superordinate-level categorization tasks, animal/non-animal (∼85 ms) and vehicle/non-vehicle (∼87 ms). With these durations, accuracy was comparable for the two levels of categorization. In Experiment III, we set the stimulus durations for all tasks to those obtained in the corresponding superordinate-level categorization task in Experiment I (∼85 ms) or II (∼87 ms) for each participant. Performance on the superordinate-level categorization tasks was comparable between Experiments I and III (A: animal/non-animal task) and between Experiments II and III (C: vehicle/non-vehicle task). However, performance was significantly lower on the basic-level categorization task in Experiment III compared to that obtained with the longer presentation times in Experiment I (B: dog/non-dog task) and in Experiment II (D: car/non-car task). Accessing basic-level information in the periphery required longer stimulus presentation times than accessing superordinate-level information.
Discussion
The purpose of the present study was to understand the nature of the additional processing observed at both behavioral (Joubert et al., 2007; Mace et al., 2009) and neuronal (Sugase et al., 1999) levels when finer or more detailed information about a stimulus is to be accessed and, more specifically, to determine if the longer processing time could be explained—at least partly—by a necessary allocation of attentional resources. Previous studies have shown that categorization tasks are feasible at the superordinate level even under conditions when spatial attention is minimally available (Fei-Fei et al., 2005; Li et al., 2002; Rousselet et al., 2002; VanRullen et al., 2004). In the current study, we asked whether finer grained discriminations of these stimuli at the basic level would necessitate the deployment of attention. In other words, although subjects can rapidly detect the presence of an animal in a natural scene (Thorpe, Fize, & Marlot, 1996), without engaging attention (Li et al., 2002; Rousselet et al., 2002), does knowing whether the animal in the scene was (for example) a dog necessarily involve attentional resources? Our results indicate that basic-level categorization of both biologically relevant and man-made objects in natural scenes requires longer presentation times but is feasible even in the absence or near absence of spatial attention. 
Presentation time affects rapid stimulus recognition but does not affect the conclusions of this study. In fact, the stimulus SOA required for performing the peripheral task at a given threshold is by no means a predictor of the task's attentional requirements. Indeed, the disk discrimination task can be performed with a short presentation time (∼75 ms) but cannot be performed in the dual-task condition, unlike natural scene categorization tasks. However, the presentation time of the stimuli may reflect the complexity of the task. Access from coarser to finer object representations (Joubert et al., 2007; Mace et al., 2009; Sugase et al., 1999) requires a longer time for uptake of the relevant visual information, but once this information is obtained, further processing can proceed in the near absence of spatial attention. 
Nonetheless, for the natural scene categorization tasks, there was a slight decrement in performance in the dual-task condition compared to the single-task condition (3.3% in the dog/non-dog task and 4% in the car/non-car task). However, small decrements in performance are expected when participants perform two tasks simultaneously and do not necessarily imply a competition for attentional resources. For instance, they could also be due to factors such as remembering two different targets or coding and executing two responses simultaneously in the dual-task condition (Duncan, 1980; Pashler, 1994). It is actually quite remarkable to observe such small performance drops in the tasks used here. 
It has previously been shown that even when a natural scene was presented very briefly in the periphery (26 ms), observers were able to categorize it at the superordinate level fairly accurately (Rousselet et al., 2002, 2004). This may seem to contradict our findings, since we used longer presentation times. The discrepancy can be explained by the number of locations in which the peripheral stimulus could appear. Indeed, in the above-mentioned studies, accuracy dropped when four peripheral locations were used (Rousselet et al., 2004) rather than two (Rousselet et al., 2002; 80.7 ± 1.1% vs. 90.4 ± 0.6%), and in our study, the peripheral stimulus could appear at any point on the edge of an imaginary rectangle (see Methods section). Moreover, unlike in the present experiments, the stimuli used in those studies were not masked. We know from backward masking studies that masks effectively interrupt further information uptake from the images (Kahneman, 1968) and cut short the excitatory neuronal response that occurs just after the disappearance of the target (Freedman & Miller, 2008). The absence of a mask must, therefore, have allowed much longer information uptake than the short presentation duration might imply. By using a mask, we could precisely determine, at least for a peripherally presented stimulus, how much exposure is needed to accurately categorize a given stimulus at a given level. 
Recently, a few studies have reported contradictory findings about object detection and categorization performance. Initially, Grill-Spector and Kanwisher (2005) proposed that object detection and basic-level categorization may occur at the same early stage of visual processing. However, subsequent studies found results that are inconsistent with this claim (Bowers & Jones, 2008; de la Rosa, Choudhery, & Chatziastros, 2011; Mack, Gauthier, Sadr, & Palmeri, 2008; Mack & Palmeri, 2010). In our study, we show clearly that with the same presentation time, participants could categorize peripheral natural scenes more accurately at the superordinate level than at the basic level (Experiment III). Because images were presented in the periphery in our paradigm, any processing differences between category levels, even if small, may have been amplified, allowing a clear distinction between object detection and categorization to emerge. Thus, our results suggest that information about a stimulus accumulates over its presentation duration and, hence, argue in favor of a dissociation between the different levels of object recognition, consistent with the finding that superordinate-level categories are processed faster than basic-level categories (Mace et al., 2009). 
Natural object categorization (i.e., of biologically pertinent stimuli) could benefit from neuronal populations that have evolved to selectively process these types of stimuli (New et al., 2007). However, our results show that basic-level categorization of natural scenes can be performed in the near absence of attention not only for natural objects but also for man-made objects. Given that man-made objects are relatively new in our environment, it is improbable that the ability to categorize them with little attention is innate (Polk & Farah, 1998). A more likely explanation is that certain neural networks are predisposed to form representations of stimuli that are often encountered in the environment (Aguirre, Zarahn, & D'Esposito, 1998). Models of visual processing have been developed on the similar assertion that selective representations for familiar visual stimuli could develop with experience (Riesenhuber & Poggio, 1999; Serre, Oliva, & Poggio, 2007). fMRI studies showing stimulus-specific activity in the posterior ventro-temporal cortex for categories such as faces and buildings provide further support for this view (Aguirre et al., 1998; Chao, Haxby, & Martin, 1999; Epstein & Kanwisher, 1998; Haxby et al., 1999; Kanwisher, McDermott, & Chun, 1997; Puce, Allison, Asgari, Gore, & McCarthy, 1996). Similarly, category-specific responses have been observed in single neurons in the human medial temporal lobe (Kreiman, Koch, & Fried, 2000). VanRullen et al. (2004) have proposed that such selective representations could underlie visual processing of natural stimuli in the near absence of attention. 
In contrast, discrimination of unfamiliar stimuli (e.g., the bisected disks in Experiments I and II, novel stimuli, or low-frequency objects), for which such selective representations would not exist, would necessarily involve attentional resources (Fei-Fei et al., 2005; VanRullen, 2009; VanRullen et al., 2004). 
Experience and familiarity with stimuli may shape the selectivity of neural networks and enable rapid processing of familiar objects without requiring attention. With longer stimulus durations, the selectivity of neurons in higher order visual areas may increase and activate a finer representation of the stimulus (Keysers, Xiao, Foldiak, & Perrett, 2001). This may explain why the presentation time required for basic-level categorization is longer than for superordinate-level categorization. However, the activation of these fine representations does not require attentional resources. 
Conclusion
In this study, we have shown that when attention is engaged by an attentionally demanding task, basic-level categorization of natural (dog/non-dog animal) and man-made objects (car/non-car vehicle) can still be performed efficiently. This reveals that fine-grained representations of familiar objects can be accessed in the near absence of attention, presumably because they are supported by hardwired, specialized networks (Fei-Fei et al., 2005; VanRullen, 2009; VanRullen et al., 2004). However, this categorization depends on the presentation time of the stimuli (the more detailed the task, the longer the stimulus duration). In other words, we “know” that it is an animal before we “know” that it is a dog, but neither recognition process requires attention. 
Acknowledgments
We thank Ramakrishna Chakravarthi for providing helpful comments on the manuscript. This research was supported by the CNRS and the University Paul Sabatier Toulouse III and by a Fyssen Foundation Grant to LR. 
Commercial relationships: none. 
Corresponding author: Michele Fabre-Thorpe. 
Email: mft@cerco.ups-tlse.fr. 
Address: CNRS CERCO UMR 5549, Pavillon Baudot, CHU Purpan BP 25202, 31052 Toulouse Cedex, France. 
References
Aguirre G. K. Zarahn E. D'Esposito M. (1998). An area within human ventral cortex sensitive to “building” stimuli: Evidence and implications. Neuron, 21, 373–383. [CrossRef] [PubMed]
Ahissar M. Nahum M. Nelken I. Hochstein S. (2009). Reverse hierarchies and sensory learning. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 364, 285–299. [CrossRef]
Bowers J. S. Jones K. W. (2008). Detecting objects is easier than categorizing them. Quarterly Journal of Experimental Psychology, 61, 552–557. [CrossRef]
Braun J. (1994). Visual search among items of different salience: Removal of visual attention mimics a lesion in extrastriate area V4. Journal of Neuroscience, 14, 554–567. [PubMed]
Braun J. Julesz B. (1998). Withdrawing attention at little or no cost: Detection and discrimination tasks. Perception & Psychophysics, 60, 1–23. [CrossRef] [PubMed]
Braun J. Sagi D. (1990). Vision outside the focus of attention. Perception & Psychophysics, 48, 45–58. [CrossRef] [PubMed]
Chao L. L. Haxby J. V. Martin A. (1999). Attribute-based neural substrates in temporal cortex for perceiving and knowing about objects. Nature Neuroscience, 2, 913–919. [CrossRef] [PubMed]
Cromer J. A. Roy J. E. Miller E. K. (2010). Representation of multiple, independent categories in the primate prefrontal cortex. Neuron, 66, 796–807. [CrossRef] [PubMed]
de la Rosa S. Choudhery R. N. Chatziastros A. (2011). Visual object detection, categorization, and identification tasks are associated with different time courses and sensitivities. Journal of Experimental Psychology: Human Perception and Performance, 37, 38–47. [CrossRef] [PubMed]
Duncan J. (1980). The locus of interference in the perception of simultaneous stimuli. Psychological Review, 87, 272–300. [CrossRef] [PubMed]
Epstein R. Kanwisher N. (1998). A cortical representation of the local visual environment. Nature, 392, 598–601. [CrossRef] [PubMed]
Fei-Fei L. Iyer A. Koch C. Perona P. (2007). What do we perceive in a glance of a real-world scene? Journal of Vision, 7(1):10, 1–29, http://www.journalofvision.org/content/7/1/10, doi:10.1167/7.1.10. [PubMed] [Article] [CrossRef]
Fei-Fei L. VanRullen R. Koch C. Perona P. (2005). Why does natural scene categorization require little attention? Exploring attentional requirements for natural and synthetic stimuli. Visual Cognition, 12, 893–924. [CrossRef]
Freedman D. J. Miller E. K. (2008). Neural mechanisms of visual categorization: Insights from neurophysiology. Neuroscience and Biobehavioral Reviews, 32, 311–329. [CrossRef] [PubMed]
Freedman D. J. Riesenhuber M. Poggio T. Miller E. K. (2003). A comparison of primate prefrontal and inferior temporal cortices during visual categorization. Journal of Neuroscience, 23, 5235–5246. [PubMed]
Gegenfurtner K. R. Rieger J. (2000). Sensory and cognitive contributions of color to the recognition of natural scenes. Current Biology, 10, 805–808. [CrossRef] [PubMed]
Greene M. R. Oliva A. (2009). The briefest of glances: The time course of natural scene understanding. Psychological Science, 20, 464–472. [CrossRef] [PubMed]
Grill-Spector K. Kanwisher N. (2005). Visual recognition. Psychological Science, 16, 152–160. [CrossRef] [PubMed]
Haxby J. V. Ungerleider L. G. Clark V. P. Schouten J. L. Hoffman E. A. Martin A. (1999). The effect of face inversion on activity in human neural systems for face and object perception. Neuron, 22, 189–199. [CrossRef] [PubMed]
Joubert O. R. Fize D. Rousselet G. A. Fabre-Thorpe M. (2008). Early interference of context congruence on object processing in rapid visual categorization of natural scenes. Journal of Vision, 8(13):11, 1–18, http://www.journalofvision.org/content/8/13/11, doi:10.1167/8.13.11. [PubMed] [Article] [CrossRef]
Joubert O. R. Rousselet G. A. Fize D. Fabre-Thorpe M. (2007). Processing scene context: Fast categorization and object interference. Vision Research, 47, 3286–3297. [CrossRef] [PubMed]
Julesz B. Schumer R. A. (1981). Early visual perception. Annual Review of Psychology, 32, 575–627. [CrossRef] [PubMed]
Kahneman D. (1968). Method, findings, and theory in studies of visual masking. Psychological Bulletin, 70, 404–425. [CrossRef] [PubMed]
Kanwisher N. McDermott J. Chun M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17, 4302–4311. [PubMed]
Keysers C. Xiao D. K. Foldiak P. Perrett D. I. (2001). The speed of sight. Journal of Cognitive Neuroscience, 13, 90–101. [CrossRef] [PubMed]
Kreiman G. Koch C. Fried I. (2000). Category-specific visual responses of single neurons in the human medial temporal lobe. Nature Neuroscience, 3, 946–953. [CrossRef] [PubMed]
Lee D. K. Koch C. Braun J. (1999). Attentional capacity is undifferentiated: Concurrent discrimination of form, color, and motion. Perception & Psychophysics, 61, 1241–1255. [CrossRef] [PubMed]
Li F. F. VanRullen R. Koch C. Perona P. (2002). Rapid natural scene categorization in the near absence of attention. Proceedings of the National Academy of Sciences of the United States of America, 99, 9596–9601. [CrossRef] [PubMed]
Mace M. J. Joubert O. R. Nespoulous J. L. Fabre-Thorpe M. (2009). The time-course of visual categorizations: You spot the animal faster than the bird. PLoS One, 4, e5927.
Mack M. L. Gauthier I. Sadr J. Palmeri T. J. (2008). Object detection and basic-level categorization: Sometimes you know it is there before you know what it is. Psychonomic Bulletin & Review, 15, 28–35. [CrossRef] [PubMed]
Mack M. L. Palmeri T. J. (2010). Decoupling object detection and categorization. Journal of Experimental Psychology: Human Perception and Performance, 36, 1067–1079. [CrossRef] [PubMed]
Meyers E. M. Freedman D. J. Kreiman G. Miller E. K. Poggio T. (2008). Dynamic population coding of category information in inferior temporal and prefrontal cortex. Journal of Neurophysiology, 100, 1407–1419. [CrossRef] [PubMed]
Murphy G. L. Brownell H. H. (1985). Category differentiation in object recognition: Typicality constraints on the basic category advantage. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11, 70–84. [CrossRef] [PubMed]
New J. Cosmides L. Tooby J. (2007). Category-specific attention for animals reflects ancestral priorities, not expertise. Proceedings of the National Academy of Sciences of the United States of America, 104, 16598–16603. [CrossRef] [PubMed]
Pashler H. (1994). Dual-task interference in simple tasks—Data and theory. Psychological Bulletin, 116, 220–244. [CrossRef] [PubMed]
Polk T. A. Farah M. J. (1998). The neural development and organization of letter recognition: Evidence from functional neuroimaging, computational modeling, and behavioral studies. Proceedings of the National Academy of Sciences of the United States of America, 95, 847–852. [CrossRef] [PubMed]
Puce A. Allison T. Asgari M. Gore J. C. McCarthy G. (1996). Differential sensitivity of human visual cortex to faces, letterstrings, and textures: A functional magnetic resonance imaging study. Journal of Neuroscience, 16, 5205–5215. [PubMed]
Reddy L. Wilken P. Koch C. (2004). Face–gender discrimination is possible in the near-absence of attention. Journal of Vision, 4(2):4, 106–117, http://www.journalofvision.org/content/4/2/4, doi:10.1167/4.2.4. [PubMed] [Article] [CrossRef]
Riesenhuber M. Poggio T. (1999). Hierarchical models of object recognition in cortex. Nature Neuroscience, 2, 1019–1025. [CrossRef] [PubMed]
Rosch E. Mervis C. B. Gray W. D. Johnson D. M. Boyes-Braem P. (1976). Basic objects in natural categories. Cognitive Psychology, 8, 382–439. [CrossRef]
Rossion B. Pourtois G. (2004). Revisiting Snodgrass and Vanderwart's object pictorial set: The role of surface detail in basic-level object recognition. Perception, 33, 217–236. [CrossRef] [PubMed]
Rousselet G. A. Fabre-Thorpe M. Thorpe S. J. (2002). Parallel processing in high-level categorization of natural images. Nature Neuroscience, 5, 629–630. [PubMed]
Rousselet G. A. Thorpe S. J. Fabre-Thorpe M. (2004). Processing of one, two or four natural scenes in humans: The limits of parallelism. Vision Research, 44, 877–894. [CrossRef] [PubMed]
Russell B. Torralba A. Murphy K. Freeman W. (2008). LabelMe: A database and web-based tool for image annotation. International Journal of Computer Vision, 77, 157–173. [CrossRef]
Serre T. Oliva A. Poggio T. (2007). A feedforward architecture accounts for rapid categorization. Proceedings of the National Academy of Sciences of the United States of America, 104, 6424–6429. [CrossRef] [PubMed]
Sugase Y. Yamane S. Ueno S. Kawano K. (1999). Global and fine information coded by single neurons in the temporal visual cortex. Nature, 400, 869–873. [CrossRef] [PubMed]
Thorpe S. Fize D. Marlot C. (1996). Speed of processing in the human visual system. Nature, 381, 520–522. [CrossRef] [PubMed]
Treisman A. M. Gelade G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97–136. [CrossRef] [PubMed]
VanRullen R. (2009). Binding hardwired versus on-demand feature conjunctions. Visual Cognition, 17, 103–119. [CrossRef]
VanRullen R. Reddy L. Koch C. (2004). Visual search and dual tasks reveal two distinct attentional resources. Journal of Cognitive Neuroscience, 16, 4–14. [CrossRef] [PubMed]
Yip A. W. Sinha P. (2002). Contribution of color to face recognition. Perception, 31, 995–1003. [CrossRef] [PubMed]
Figure 1
 
Examples of natural scene images used in the study. In Experiment I (top row), the objects contained in the images were biologically pertinent and could be categorized as (A) animal at the superordinate level or (B) dog at the basic level. In Experiment II (bottom row), stimuli were man-made objects categorized as (C) vehicle at the superordinate level or (D) car at the basic level.
Figure 2
 
Schematic timeline for one trial in the dual-task paradigm. At the end of a trial, participants were asked to perform the central letter discrimination task (whether all letters were the same or one was different from the others) and/or the peripheral task (e.g., whether a dog was present or not). The display was the same for all conditions and only the instructions differed. The letters and the peripheral stimulus were masked individually. The central SOA and the peripheral SOA indicate the presentation time for the letters (∼215 ms) and for the dog/non-dog task (∼180 ms), respectively. They were adjusted for each subject and each task.
Table 1
 
Procedure followed by each subject to determine the SOAs required in each task. SOAs were adjusted after each block as follows: They were decreased by 10 ms when participants' performance in the single-task condition was above 85% and increased by 10 ms when it fell below 75%. Note that the peripheral SOA determined in Step 1 is not used subsequently as it refers to a categorization task (female/male faces) that we used to train subjects on the dual-task paradigm and to determine central SOAs. Peripheral SOAs relevant for the present experiments are individually determined in Step 2.
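The block-by-block adjustment rule described in the caption can be sketched as a simple up/down staircase. This is an illustrative reconstruction, not the authors' script; the function name and the starting SOA in the usage note are assumptions, while the 85%/75% thresholds and 10 ms step come from the procedure above.

```python
def adjust_soa(soa_ms: int, block_accuracy: float) -> int:
    """Return the SOA (in ms) for the next block, given single-task
    accuracy on the block just completed (as a proportion, 0-1)."""
    if block_accuracy > 0.85:
        soa_ms -= 10   # task too easy: shorten stimulus presentation
    elif block_accuracy < 0.75:
        soa_ms += 10   # task too hard: lengthen stimulus presentation
    return soa_ms      # within 75-85%, the SOA is left unchanged
```

For example, a subject starting at a (hypothetical) 100 ms SOA who scores 90% correct would run the next block at 90 ms; a score of 70% would raise it to 110 ms. Iterating this rule converges on the presentation time that holds each subject near ~80% accuracy, which is what makes the single- and dual-task SOAs in Tables 2 and 3 subject-specific.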
Figure 3
 
Results of six participants in the dual-task paradigm for biological and artificial stimuli in the periphery. (1) Individual results. The horizontal axis represents performance on the central attentionally demanding letter discrimination task. The vertical axis represents accuracy (A) on the peripheral basic-level categorization task (dog/non-dog animal), (B) on the superordinate-level categorization task (animal/non-animal), and (C) on the color pattern discrimination task. Participants' mean performance is represented by a blue circle in the single-task condition (single central task and single peripheral task) and a red circle in the dual-task condition. Each black point represents participants' performance for a 48-trial block in the dual-task condition. For plotting purposes, we assume that in the single-task condition performance on the other task was at chance (50%). The error bars represent the SEM. For all participants, performance in the dual-task condition was not significantly different from performance in the single-task condition (paired t-test, p > 0.05), except for GV in the dog/non-dog animal task (A1) and GBJ in the animal/non-animal task (B1). In contrast, performance of all participants in the color pattern discrimination task (C1) was dramatically impaired in the dual-task condition compared to the single-task condition (paired t-test, p < 10−5). (2) Normalized results. Each circle represents the mean of one participant's performance in the dual-task condition, normalized by his/her performance in the single-task condition. Normalized values are obtained by a linear scaling that maps the average single-task performance to 100%, leaving chance at 50% (see Methods section). 
These results demonstrate that participants cannot perform an attentionally demanding task when attentional resources are removed from the periphery (C2), but discriminating biologically relevant stimuli at the superordinate (B2) and basic levels (A2) is robust even in the near absence of attention.
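The normalization used in the (2) panels — a linear rescaling that pins chance at 50% and maps each subject's single-task accuracy to 100% — can be written compactly. A minimal sketch, assuming accuracies are expressed in percent; the function name is illustrative:

```python
def normalize(dual_pct: float, single_pct: float, chance: float = 50.0) -> float:
    """Linearly rescale dual-task accuracy so that single-task accuracy
    maps to 100% while chance performance stays fixed at `chance`."""
    return chance + (dual_pct - chance) * (100.0 - chance) / (single_pct - chance)
```

Under this scaling, a subject who scores 65% in the dual task against an 80% single-task baseline is plotted at 75%, and dual-task performance equal to the single-task baseline lands exactly at 100%, which is why unimpaired tasks cluster near the 100% line in the normalized panels.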
Figure 4
 
Results of six participants in the dual-task paradigm for man-made object categorization tasks in the periphery. Legend as in Figure 3. Performance in the dual-task paradigm was tested for peripheral categorization at the (A) basic (car/non-car vehicle) and (B) superordinate levels (vehicle/non-vehicle). There was no significant difference in performance between the single-task condition and the dual-task condition (paired t-test, p > 0.05) on the basic-level categorization task for MP and GBJ (A1) and for MP, GV, and RC on the superordinate-level categorization (vehicle/non-vehicle) task (B1). The normalized dual-task performance for the basic-level categorization task (A2) and for the superordinate-level categorization (B2) was above 85% of the performance in the single-task condition. However, when tested in the same paradigm in a color pattern discrimination task, participants' performance was at chance level (not shown here). This suggests that although performance was slightly lower in the dual-task condition compared to the single-task condition, participants could perform man-made stimulus categorization tasks at the basic and superordinate levels in the near absence of attention.
Figure 5
 
Summary of the results for the peripheral tasks in Experiments I and II in the dual-task condition. Each circle represents the mean of the normalized performance across all subjects for each task. Error bars represent the SEM. When attention was not fully available, natural scene categorization was performed with high levels of accuracy regardless of the categorization level (superordinate or basic) or object type (biologically relevant animals or man-made vehicles). In contrast, performance on the disk discrimination task was reduced to chance level in the same dual-task paradigm.
Figure 6
 
Mean performance on the single peripheral tasks in Experiments I and II (dark color) and in Experiment III (light color). In Experiments I and II, stimulus durations used for the basic-level categorization tasks, dog/non-dog (∼180 ms) and car/non-car (∼140 ms), were around twice those used for the corresponding superordinate-level categorization tasks, animal/non-animal (∼85 ms) and vehicle/non-vehicle (∼87 ms). With these durations, accuracy was comparable for the two levels of categorization. In Experiment III, we set the stimulus durations for all tasks to those obtained in the corresponding superordinate-level categorization task in Experiment I (∼85 ms) or II (∼87 ms) for each participant. Performance on the superordinate-level categorization tasks was comparable between Experiments I and III (A: animal/non-animal task) and between Experiments II and III (C: vehicle/non-vehicle task). However, performance was significantly lower on the basic-level categorization task in Experiment III compared to that obtained with the longer presentation times in Experiment I (B: dog/non-dog task) and in Experiment II (D: car/non-car task). Accessing basic-level information in the periphery required longer stimulus presentation times than accessing superordinate-level information.
Table 2
 
Stimulus presentation time (SOA) used in the final testing session for the 6 participants in Experiment I.
            Central SOA (ms)    Peripheral SOAs (ms)
Subject     Letters             Dog     Animal    Disk
MP          220                 170     90        70
LR          200                 170     100       50
GV          170                 140     60        60
GBJ         260                 220     110       90
LD          250                 205     80        60
RV          200                 170     70        70
Table 3
 
Stimulus presentation time (SOA) used in the final testing session for the 6 participants in Experiment II.
            Central SOA (ms)    Peripheral SOAs (ms)
Subject     Letters             Car     Vehicle   Disk
MP          220                 160     70        70
GV          170                 105     75        60
GBJ         260                 145     90        90
LD          250                 175     75        60
RS          200                 140     100       120
RC          190                 130     110       80