Research Article | September 2010
Fur in the midst of the waters: Visual search for material type is inefficient
Jeremy M. Wolfe, Loretta Myers
Journal of Vision September 2010, Vol. 10(9):8. doi:10.1167/10.9.8
Abstract

A limited set of attributes can guide visual selective attention. Thus, it is possible to deploy attention to an item defined by an appropriate color, size, or orientation but not to a specific type of line intersection or a specific letter (assuming other attributes like orientation are controlled). What defines the set of guiding attributes? Perhaps it is the set of attributes of surfaces or materials in the world. L. Sharan, R. E. Rosenholtz, and E. H. Adelson (submitted for publication) have shown that observers are extremely adept at identifying materials. Are they equally adept at guiding attention to one type of material among distractors of another? A series of visual search experiments shows that the answer is “no.” It may be easy to identify “fur” or “stone,” but search for a patch of fur among the stones will be inefficient.

Introduction
We search for things all day long. Even if the object of desire is in plain view, we may have to search because we are incapable of instantly and simultaneously recognizing all of the objects in the visual field. Most, if not all, acts of object recognition require that we select the target object by directing attention to it. Fortunately, we do not search randomly. Our attention can be guided by attributes of the target object (Egeth, Virzi, & Garbart, 1984; Williams, 1966; Wolfe, Cave, & Franzel, 1989). Defining the set of guiding attributes has been a research project for over 25 years (Treisman, 1985). In its original form, this was a search for the “preattentive” features that could be found in the first, parallel stage of processing proposed in Treisman's Feature Integration Theory (Treisman & Gelade, 1980). (Note: Henceforth, we try to use “attribute” to refer to a type of property like color and “feature” to refer to an instance of an attribute; e.g., red.) One of the diagnostics of a preattentive feature was that it could be found in a visual search display in a time that was independent of the number of distractor items in the display. That is, the slope of the function relating set size to reaction time (RT) would be near zero. 
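The efficiency measure described above, the slope of the RT × set size function, is an ordinary least-squares slope. A minimal sketch in Python (the RT values are invented for illustration, not data from this study):

```python
# Least-squares slope of mean reaction time (RT) against set size.
# These RT values are hypothetical, for illustration only.
set_sizes = [1, 2, 3, 4]
mean_rts = [520.0, 545.0, 572.0, 595.0]  # ms

n = len(set_sizes)
mean_x = sum(set_sizes) / n
mean_y = sum(mean_rts) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(set_sizes, mean_rts))
         / sum((x - mean_x) ** 2 for x in set_sizes))
print(f"search slope: {slope:.1f} ms/item")
```

A slope near 0 ms/item is the signature of efficient, "preattentive" search; slopes in the 20–40 ms/item range are the classic signature of inefficient search.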
In Treisman's original formulation, there were parallel searches that produced these near-zero slopes and serial searches that did not. It subsequently became clear that there was a continuum of search efficiencies (Wolfe, 1998a) and that an important factor in determining search efficiency was the degree to which preattentive feature information could be used to “guide” attention (Wolfe, 1994, 2007; Wolfe et al., 1989). Thus, if observers (Os) were looking for a red “T” among black and red “Ls”, the RT would depend on the number of red items. Attention could be guided toward them and not wasted on black letters (Egeth et al., 1984; Kaptein, Theeuwes, & Van der Heijden, 1994). 
What are the attributes that can guide attention? It is possible to catalog them (Thornton & Gilden, 2007; Treisman, 1986b; Wolfe, 1998a; Wolfe & Horowitz, 2004). Fundamental properties like color, size, motion, and orientation appear on essentially all lists. Most lists include a variety of less obvious properties like lighting direction (Sun & Perona, 1998) and various depth cues (e.g., Epstein & Babler, 1990; He & Nakayama, 1995; Ramachandran, 1988). What has been lacking is a principled reason why some attributes can guide attention and others cannot. Treisman originally proposed that preattentive features were those extracted from the input at early stages in visual processing, perhaps primary visual cortex (V1; Treisman, 1986a), but there are difficulties with this idea. First, there are attributes such as the aforementioned lighting direction and depth cues that seem to guide attention but are unlikely to be processed by V1. Moreover, even the basic attributes that guide attention sometimes seem to have been processed by later stages before guiding attention. For example, in search for targets defined by size, the guiding attribute is not retinal size but the perceived size calculated after size constancy mechanisms have done their work (Aks & Enns, 1996), though not all guiding attributes are "post-constancy" attributes (Moore & Brown, 2001). 
One alternative hypothesis is that the basic attributes are those that would describe the properties of surfaces (He & Nakayama, 1992; Nakayama & He, 1995; Wolfe, Birnkrant, Kunar, & Horowitz, 2005). Thus, search for a lime might be guided by its green, curved, glossy, lime-textured surface properties. This idea is appealing because the goal of search—at least, outside of the laboratory—is not to find “green” or “vertical” or “shiny.” Typically, the searcher is searching for some object and the attributes of the visible surface of that object are what the searcher can use to locate the object. Thus, it might make sense for the guiding attributes to be the mid-level vision properties of surfaces rather than earlier representations of the stimulus. This would be analogous to the situation elsewhere in vision. For example, it is the task of the visual system to estimate the size of objects and not to sense the size of the retinal image (Gogel, 1969). 
This hypothesis receives an indirect boost from recent work by Sharan, Rosenholtz, and Adelson (submitted for publication). In a series of experiments, they showed that Os could identify material properties of objects very rapidly. In their study, Os were 80% accurate with a 40-ms exposure and 92% accurate after 120 ms (Sharan et al., submitted for publication). Figure 1 shows examples from one of their sets of stimuli. 
Figure 1
 
It is quite easy to identify the material in the top panel as “glass” and in the bottom panel as metal. Stimuli from Sharan et al. (submitted for publication).
If Os can identify high-level material category very rapidly, perhaps material properties can guide search and perhaps the sets of basic attributes that support efficient visual search are those that define materials. Sharan et al.'s work does not address this issue because they presented Os with a single image at the focus of attention. To test this hypothesis, we had Os search for targets defined by one material property among distractors of other materials. In Experiment 1, we used stimuli from Sharan et al. They had deliberately developed a highly heterogeneous set of material exemplars in order to make sure that no non-material property was supporting the good identification behavior shown by their Os. That is, it would be of limited interest if Os could rapidly identify water only because it was always blue. As search stimuli in our Experiment 1, however, Sharan et al.'s stimuli proved to give rise to very inefficient search. Therefore, in Experiment 2, we used a set of much more homogeneous surface textures. Nevertheless, we still failed to find efficient search for one material type among distractors of another type. We conclude that it is unlikely that material type can guide search. 
Experiment 1: Diverse material exemplars
Stimuli and procedure
Sharan et al. developed two sets of stimuli: close-up and whole object. Our stimuli were drawn from the close-up set. Each stimulus showed part of an object. Some shape and depth information was available, as shown in Figure 1, though the identity of the object was not always obvious. There were nine material categories: fabric, glass, leather, metal, paper, plastic, stone, water, and wood. Each image was 256 by 256 pixels and subtended 4.1 × 4.1 degrees at a viewing distance of 57.4 cm. The center of each image was 3.2 degrees from the central fixation point. Set sizes of 1, 2, 3, or 4 were displayed on a mid-gray background that subtended 12.7 × 12.7 degrees. 
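The stimulus sizes above follow from the standard visual-angle relation, theta = 2·atan(s / 2d), for an object of physical size s viewed at distance d. A quick sketch (the function name is ours, and the physical size in cm is an assumption for illustration):

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Visual angle (in degrees) subtended by an object of a given physical size."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# At 57.4 cm, a patch about 4.1 cm across subtends roughly 4.1 degrees;
# this is why viewing distances near 57 cm are popular in psychophysics
# (1 cm on the screen is then approximately 1 degree of visual angle).
print(f"{visual_angle_deg(4.1, 57.4):.2f} deg")  # prints about 4.09 deg
```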
Four conditions were tested: Stone targets among heterogeneous distractors, plastic targets among heterogeneous distractors, stone among plastic, and plastic among stone. There are dozens of other possible experiments. These conditions were picked because stone stimuli appeared to be quite distinct from plastic. Pilot experiments with other pairings produced qualitatively similar results. For each condition, Os were tested for 330 trials, the first 30 of which were considered to be practice trials. Targets were present on 50% of trials. Stimuli were visible until Os responded. Os made target-present or -absent responses via the keyboard. Os were told to respond quickly and accurately. Feedback was given after each trial. 
Observers
Twelve paid participants between the ages of 18 and 55 were tested on all conditions. Each participant reported no history of eye or muscle disorders. All had at least 20/25 acuity and passed the Ishihara Test for Color Blindness. Informed consent was obtained for all participants and each participant was paid $10/h. 
Results
Reaction times
Figure 2 shows mean RT as a function of set size for the heterogeneous distractor conditions on the left and the homogeneous distractor conditions on the right. The main finding is that all of the conditions produce inefficient searches. All slopes are significantly greater than zero (all t(11) > 3.3, all p < 0.01). The most efficient condition (stone among plastic) has a slope of 26 ms/item with a lower bound on its 95% confidence interval of 14 ms/item. 
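The lower bound quoted above is the bottom of a standard t-based 95% confidence interval computed over per-observer slopes. A sketch with invented slopes for 12 observers (the actual per-observer values are not reported here):

```python
import math

# Hypothetical per-observer search slopes in ms/item (n = 12, as in Experiment 1).
slopes = [18, 22, 31, 26, 29, 24, 35, 21, 27, 30, 23, 26]

n = len(slopes)
mean = sum(slopes) / n
sd = math.sqrt(sum((s - mean) ** 2 for s in slopes) / (n - 1))  # sample SD
sem = sd / math.sqrt(n)                                         # standard error of the mean
t_crit = 2.201  # two-tailed critical t for df = 11, alpha = .05
lower, upper = mean - t_crit * sem, mean + t_crit * sem
print(f"mean slope {mean:.1f} ms/item, 95% CI [{lower:.1f}, {upper:.1f}]")
```

If the lower bound of such an interval is well above 0 ms/item, the search is reliably inefficient.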
Figure 2
 
Reaction time × set size functions for the four conditions of Experiment 1. Error bars are 1 SEM. Green lines show target-present trials. Red lines show target-absent trials. Closed symbols show plastic target data. Open symbols show stone target data.
The heterogeneous distractor conditions are harder than the homogeneous conditions, as would be expected (Duncan & Humphreys, 1989), for both target-present and -absent conditions (both F(1,11) > 5.9, p < 0.05, partial eta-squared (pes) > 0.35). 
Errors
Figure 3 shows error rates for each of the four conditions. Os made more errors than is typical in simple visual search tasks. Typical error rates in basic search tasks would be less than 10%, even at the larger set sizes (Wolfe, Palmer, & Horowitz, 2010). This is especially notable in the false alarm rates in the conditions in which distractors were heterogeneous (A and B). False alarm rates in this sort of experiment are typically very low. Here, apparently, Os confused some distractors for targets and vice versa. Two-way ANOVAs document that errors are greater in the heterogeneous distractor conditions than in the homogeneous conditions (Miss errors: F(1,11) = 16.5, p = 0.0019, pes = 0.60, FA: F(1,11) = 38.2, p < 0.0001, pes = 0.78). Miss errors are more common for plastic targets than for stone targets (F(1,11) = 6.4, p = 0.027, pes = 0.37). False alarms did not differ between target types (F(1,11) = 0.53, p = 0.48, pes = 0.05). 
Figure 3
 
Error rates for Experiment 1. Error bars are 1 SEM.
Discussion
It is clear that while these stimuli may support rapid identification, they do not support efficient search. Indeed, these slopes are comparable to or steeper than those of standard "inefficient" searches like a search for a T among Ls or a 2 among 5s (Wolfe, 1998b). It is important to note that our data in no way contradict Sharan et al.'s findings. The ability to perform very rapid identification of an attended item does not imply an ability to guide search on the basis of that identification. For example, a "T" can be rapidly distinguished from an "L", but a search for a T among Ls is inefficient. The same applies to isolated objects. They might be easy to identify, but search for one object among assorted other objects is quite inefficient (Vickery, King, & Jiang, 2005). Still, it seemed possible that Sharan et al.'s materials might have been too complex and too heterogeneous for present purposes. Accordingly, in Experiment 2, we repeated the experiment using a simpler, more homogeneous set of stimuli. 
Experiment 2: Simple material stimuli
Experiment 2 used relatively close-up, planar images of feather, wood, fur, water, and stone. Three conditions were tested: Feather among wood in grayscale (example shown in Figure 4), fur among water in grayscale, and stone among fur in color. Again, many other search tasks are possible, but these materials seemed very distinct from one another. If a condition such as fur among water fails to produce efficient search, it does not seem likely that material would be a general, guiding attribute. As shown in Figure 4, stimuli were packed together since denser stimuli can produce higher salience (Nothdurft, 2000) and more efficient search for weak preattentive features. For example, it is easier to find a red target among reddish orange distractors if the target and distractors are closely packed, facilitating comparisons between items. 
Figure 4
 
Search for feather among wood.
Larger set sizes were used in this experiment because, again, there are circumstances under which feature guidance is more effective with larger set sizes (Bravo & Nakayama, 1992). Two of the three conditions were performed with grayscale images. This eliminated the possibility of a color cue (e.g., water might be bluer than fur) and it eliminated the distracting effects of irrelevant color variation. The third condition, stone among fur, was performed in color in case the presence of color was a vital part of material identification even if it was not diagnostic. The fur and stone stimuli had heavily overlapping distributions of colors (at least, by observation). 
Stimuli were displayed against a deep blue background to provide contrast with the grayscale images. Each individual image subtended 4.1 × 4.1 degrees at a viewing distance of 57.4 cm. Set sizes were 4 (2 × 2 matrix), 9 (3 × 3), 16 (4 × 4), and 25 (5 × 5). Before performing the search task, Os saw each of the stimuli individually. They were asked to verbally categorize the stimuli and were corrected on the rare occasion of an error. Due to a programming error, images were displayed with the upper left corner of the matrix aligned with the upper left corner of the 20.5 × 20.5 degree display area. Thus, the 5 × 5, set size 25 grid was always centered on the screen while the smaller set sizes were not. A subsequent control experiment suggested that this did not substantially alter the results. Methods were otherwise the same as in Experiment 1. 
Observers
Eleven paid participants between the ages of 18 and 55 were tested on all conditions. Participants reported no history of eye or muscle disorders and had 20/25 vision or better. All passed the Ishihara Tests for Color Blindness. Informed consent was obtained for all participants and each participant was paid $10/h. 
Results
Os were 95% correct on average. The worst performance was in the fur category (89%) because fur stimuli were occasionally mistaken for feather stimuli. Since Os did not search for fur among feather, this is unlikely to have been a significant factor in the search results. For the target–distractor pairs used here (stone–fur, fur–water, and feather–wood), there were no categorization confusions (i.e., no stones labeled as "fur"). 
RTs less than 200 ms and over 5000 ms were removed from analysis. This eliminated less than 1% of trials. Mean RTs are shown in Figure 5. 
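The trimming rule above amounts to a simple range filter over trial RTs; the values here are invented for illustration:

```python
# Drop trials with RT < 200 ms (likely anticipations) or RT > 5000 ms (likely lapses).
rts_ms = [150, 480, 620, 5400, 710, 390, 4990, 205]  # hypothetical trial RTs

kept = [rt for rt in rts_ms if 200 <= rt <= 5000]
removed_frac = 1 - len(kept) / len(rts_ms)
print(f"kept {len(kept)} of {len(rts_ms)} trials (removed {removed_frac:.0%})")
```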
Figure 5
 
RT × set size functions for the three conditions of Experiment 2. Solid lines show target-present data. Dashed lines show target-absent data. Open circles (green) show search for stone among fur, filled circles (purple) show fur among water, and filled squares (blue) show feather among wood. Error bars show 1 SEM.
Overall, these simpler, more homogeneous stimuli did not produce particularly efficient search. The stone–fur and fur–water conditions produce the standard inefficient search results that would be seen in a search for a 2 among 5s or a T among Ls. The fact that the stone–fur stimuli were in color while the fur–water stimuli were in grayscale does not seem to have made any qualitative difference to the results. Feather–wood, however, initially appeared to be something of an exception to the general rule that search for material type is inefficient. The target-present slope of 9 ms/item is quite efficient (but see discussion below). ANOVAs with factors of Condition and Set Size reveal significant effects on RT of condition for correct target-present (F(2,22) > 58, p < 0.0001, pes > 0.84) and -absent responses (F(2,22) > 61, p < 0.0001, pes > 0.85). The main effects of set size and the interaction of set size and condition are similarly highly significant (all p < 0.0001). 
Error data are shown in Figure 6. The high miss error rates, increasing with set size, for the fur–water and stone–fur conditions indicate that the RT × set size slopes for those conditions would be even less efficient without the apparent speed–accuracy tradeoff. The lower false alarm rates in this experiment are more typical of standard search tasks where observers' criteria favor misses over false alarms (Wolfe et al., 2010). 
Figure 6
 
Error data for the three conditions of Experiment 2. Solid lines show target-present data. Dashed lines show target-absent data. Open circles show search for stone among fur, filled circles show fur among water, and filled squares show feather among wood. Error bars are 1 SEM.
The feather–wood condition produced more accurate as well as more efficient search. However, the efficiency of the feather–wood condition in this experiment is probably merely an informative artifact. Wood and feather textures have non-isotropic orientation distributions. As can be seen in Figure 4, a typical patch of wood will have a grain with a predominant orientation. As it happens, in Experiment 2, the feather textures tended to be oriented vertically while the wood textures tended to be oriented horizontally. Of course, this was accidental and neither the experimenters nor the observers were explicitly aware of this confound when the experiment was run. Nevertheless, Os seem to have been sensitive to this regularity and to have used the orientation cue to guide attention. In a subsequent control experiment with orientations randomized, five new observers produced an average target-present slope of 20 ms/item and an average target-absent slope of 46 ms/item, very similar to the inefficient fur among water condition. Overall, Experiment 2 fails to produce convincing evidence for efficient search for material type. 
General discussion
Because these search tasks produce consistently inefficient search results, we conclude that material type is not an attribute that effectively guides the deployment of attention in visual search. This conclusion suffers from the usual problem of a negative finding. Perhaps there is some set of material stimuli that would produce efficient search. This is possible. The present experiment does not come close to exhausting the possible sets of materials. Nevertheless, one would think that, if guidance by material properties was a routine aspect of guided search, it should be possible to find fur among water or stone among fur pretty easily. It is hard to imagine that there are substantially more dramatic material contrasts that await testing and that would support efficient search. 
As noted above, the failure to find efficient search for material type does not mean that Os cannot rapidly identify materials. The identification task at the start of Experiment 2 shows that Os had no trouble distinguishing our target materials from the distractor materials, and Sharan et al.'s work shows that these identifications can be made rapidly even with the much more diverse materials of Experiment 1. 
The present results notwithstanding, it seems intuitively clear that material properties guide our search in the real world. Here the artifactually efficient search for feather among wood in Experiment 2 may be informative. Search for a material will be guided by the basic preattentive attributes that characterize that material if that information is available. To take a trivial example, a child's plastic toy, misplaced in the living room, is likely to have some distinctive color that is not part of the parents' interior design palette. It will be found efficiently, not because it is plastic material, but because it possesses unique features from the already established list of basic attributes. This is a persistent issue in the search for preattentive, guiding attributes. Once the stimulus is complex enough, it can be quite difficult to guarantee that no basic feature distinguishes between targets and distractors. For example, this fact has bedeviled the question of guidance to faces or facial expression for years (Eastwood, Smilek, & Merikle, 2001; Hansen & Hansen, 1988; Hershler & Hochstein, 2005, 2006; Nothdurft, 1993; Purcell & Stewart, 1988; Purcell, Stewart, & Skov, 1996; VanRullen, 2006). 
In an effort to eliminate other attributes as sources of guidance, we adopted the expedient of using heterogeneous collections of targets (several different patches of fur) and of distractors (several different patches of water). Even then, one can fall victim to a feature artifact, as we did in the feather–wood condition. Moreover, this strategy introduces its own problem when search turns out to be inefficient. Even well-established features may fail to produce efficient search if the distractors are suitably inhomogeneous. Thus, while color is irrefutably a basic, preattentive attribute, search will be inefficient if the target lies between the distractors in color space (as long as the colors are quite close to each other in that space). Search for a blue–green target among blue and green distractors, for example, will be inefficient even if separate searches for blue–green among blue and blue–green among green are efficient (Bauer, Jolicœur, & Cowan, 1996; D'Zmura, 1991). The logic of the present experiments is that the heterogeneity arises in dimensions other than the hypothetical material dimension under test. The wood is all wood even if it varies in orientation, spatial frequency content, and so forth. This works for a feature like orientation. One vertical object will be easily found among horizontal objects even if all the objects differ in their color, size, and so forth (Snowden, 1998; Treisman, 1988). It does not work for materials. Finding the one stone region among the fur requires inefficient search. 
What, then, do we know about the guiding attributes in visual search? 
  1. Guiding attributes do not seem to be the same as the attributes of the first stages of visual processing. There are guiding attributes that are not represented in early vision, such as the various cues to depth (Enns & Rensink, 1990; Kleffner & Ramachandran, 1992), and attributes that are part of early vision (e.g., orientation) seem to be represented differently in search than in early vision (Wolfe, Friedman-Hill, Stewart, & O'Connell, 1992).
  2. Guiding attributes do not seem to map onto some simple set of middle vision properties. To offer a few examples, intersections, important for the calculation of objects and occlusion, do not guide search (Bergen & Julesz, 1983; Wolfe & DiMase, 2003). Subjective contours may begin driving cells in V2 (Von der Heydt, Peterhans, & Baumgartner, 1984), but they are limited in their ability to guide search (Davis & Driver, 1994; Gurnsey, Humphrey, & Kapitan, 1992; Li, Cave, & Wolfe, 2008; Senkowski, Rottger, Grimm, Foxe, & Herrmann, 2005). In addition, the present results show that material properties do not guide search.
  3. The ability of a feature to guide attention may be dissociated from its perceptual salience. Most models of salience assume that guidance and perception are making use of the same signal, but this is demonstrably untrue. The present results show a failure to efficiently search for apparently salient surface properties. As a clearer example of the dissociation of salience and guidance, consider a recent experiment in which Os searched for a desaturated target among saturated and achromatic distractors. Os might search for pink among red and white or pale green among green and white. The stimuli were carefully calibrated so that all targets lay at the perceptual midpoint between their two distractors. Nevertheless, search for the desaturated red targets was much faster than search for any other desaturated target (Lindsey et al., 2010).
These negative facts about guiding attributes lead to the conclusion that guidance makes idiosyncratic, non-perceptual use of a limited set of attributes. The guiding attributes are derived from the visual input for the purposes of guidance. In the color example, just given, it is possible to model the precise way in which photoreceptor signals are used to guide search and to show that this guiding combination of signals is different from the combination that produces color perception. It is harder to do this with other guiding attributes—even quite basic ones like orientation. It is clear that the guiding attributes are not a simple subset of some other visual processing stage (e.g., V1). The principle that governs admission to the set of guiding attributes, if there is a principle, remains unclear. The present results indicate that the set is not the set of attributes that permits rapid categorization of materials. 
Acknowledgments
We thank Michelle Greene and Lavanya Sharan for comments. We also thank Lavanya Sharan for sharing her stimuli with us. This work was supported by NEI EY017001 and ONR N000141010278. 
Commercial relationships: none. 
Corresponding author: Jeremy M. Wolfe. 
Email: wolfe@search.bwh.harvard.edu. 
Address: 75 Francis Street, Boston, MA 02115, USA. 
References
Aks D. J. Enns J. T. (1996). Visual search for size is influenced by a background texture gradient. Journal of Experimental Psychology: Human Perception and Performance, 22, 1467–1481. [CrossRef] [PubMed]
Bauer B. Jolicœur P. Cowan W. B. (1996). Visual search for colour targets that are or are not linearly-separable from distractors. Vision Research, 36, 1439–1466. [CrossRef] [PubMed]
Bergen J. R. Julesz B. (1983). Rapid discrimination of visual patterns. IEEE Transactions on Systems, Man, and Cybernetics, 13, 857–863. [CrossRef]
Bravo M. Nakayama K. (1992). The role of attention in different visual search tasks. Perception & Psychophysics, 51, 465–472. [CrossRef] [PubMed]
Davis G. Driver J. (1994). Parallel detection of Kanisza subjective figures in the human visual system. Nature, 371, 791–793. [CrossRef] [PubMed]
Duncan J. Humphreys G. W. (1989). Visual search and stimulus similarity. Psychological Review, 96, 433–458. [CrossRef] [PubMed]
D'Zmura M. (1991). Color in visual search. Vision Research, 31, 951–966. [CrossRef] [PubMed]
Eastwood J. D. Smilek D. Merikle P. M. (2001). Differential attentional guidance by unattended faces expressing positive and negative emotion. Perception & Psychophysics, 63, 1004–1013. [CrossRef] [PubMed]
Egeth H. E. Virzi R. A. Garbart H. (1984). Searching for conjunctively defined targets. Journal of Experimental Psychology: Human Perception and Performance, 10, 32–39. [CrossRef] [PubMed]
Enns J. T. Rensink R. A. (1990). Sensitivity to three-dimensional orientation in visual search. Psychological Science, 1, 323–326. [CrossRef]
Epstein W. Babler T. (1990). In search of depth. Perception & Psychophysics, 48, 68–76. [CrossRef] [PubMed]
Gogel W. C. (1969). The sensing of retinal size. Vision Research, 9, 1079–1094. [CrossRef] [PubMed]
Gurnsey R. Humphrey G. K. Kapitan P. (1992). Parallel discrimination of subjective contours defined by offset gratings. Perception & Psychophysics, 52, 263–276. [CrossRef] [PubMed]
Hansen C. H. Hansen R. D. (1988). Finding the face in the crowd: An anger superiority effect. Journal of Personality & Social Psychology, 54, 917–924. [CrossRef]
He J. J. Nakayama K. (1992). Surfaces vs. features in visual search. Nature, 359, 231–233. [CrossRef] [PubMed]
He Z. J. Nakayama K. (1995). Visual attention to surfaces in three-dimensional space. Proceedings of the National Academy of Sciences of the United States America, 92, 11155–11159. [CrossRef]
Hershler O. Hochstein S. (2005). At first sight: A high-level pop out effect for faces. Vision Research, 45, 1707–1724. [CrossRef] [PubMed]
Hershler O. Hochstein S. (2006). With a careful look: Still no low-level confound to face pop-out. Vision Research, 46, 3028–3035. [CrossRef] [PubMed]
Kaptein N. A. Theeuwes J. Van der Heijden A. H. C. (1994). Search for a conjunctively defined target can be selectively limited to a color-defined subset of elements. Journal of Experimental Psychology: Human Perception and Performance, 21, 1053–1069. [CrossRef]
Kleffner D. A. Ramachandran V. S. (1992). On the perception of shape from shading. Perception & Psychophysics, 52, 18–36. [CrossRef] [PubMed]
Li X. Cave K. Wolfe J. M. (2008). Kanisza-style subjective contours do not guide attentional deployment in visual search but line termination contours do. Perception & Psychophysics, 70, 477–488. [CrossRef] [PubMed]
Lindsey D. T. Brown A. M. Reijnen E. Rich A. N. Kuzmova Y. Wolfe J. M. (2010). Color channels, not color appearance or color categories, guide visual search for desaturated color targets. Psychology Science, first published online on August 16, 2010.
Moore C. M. Brown L. E. (2001). Preconstancy information can influence visual search: The case of lightness constancy. Journal of Experimental Psychology: Human Perception and Performance, 27, 178–195. [CrossRef] [PubMed]
Nakayama K. He J. J. (1995). Attention to surfaces: Beyond a Cartesian understanding of focal attention. In Papathomas T. V. (Ed.), Attention to surfaces: Beyond a Cartesian understanding of visual attention. Cambridge, MA: MIT Press.
Nothdurft H. C. (1993). Faces and facial expression do not pop-out. Perception, 22, 1287–1298. [CrossRef] [PubMed]
Nothdurft H. C. (2000). Salience from feature contrast: Variations with texture density. Vision Research, 40, 3181–3200. [CrossRef] [PubMed]
Purcell D. G. Stewart A. L. (1988). The face-detection effect: Configuration enhances detection. Perception & Psychophysics, 43, 355–366. [CrossRef] [PubMed]
Purcell D. G. Stewart A. L. Skov R. B. (1996). It takes a confounded face to pop out of a crowd. Perception, 25, 1091–1108. [CrossRef] [PubMed]
Ramachandran V. S. (1988). Perception of shape from shading. Nature, 331, 163–165. [CrossRef] [PubMed]
Senkowski D. Röttger S. Grimm S. Foxe J. Herrmann C. (2005). Kanizsa subjective figures capture visual spatial attention: Evidence from electrophysiological and behavioral data. Neuropsychologia, 43, 872–886. [CrossRef] [PubMed]
Sharan L. Rosenholtz R. E. Adelson E. H. (submitted for publication). Material perception in real-world images is fast and accurate.
Snowden R. J. (1998). Texture segregation and visual search: A comparison of the effects of random variations along irrelevant dimensions. Journal of Experimental Psychology: Human Perception and Performance, 24, 1354–1367. [CrossRef] [PubMed]
Sun J. Perona P. (1998). Where is the sun? Nature Neuroscience, 1, 183–184. [CrossRef] [PubMed]
Thornton T. L. Gilden D. L. (2007). Parallel and serial processes in visual search. Psychological Review, 114, 71–103. [CrossRef]
Treisman A. (1985). Preattentive processing in vision. Computer Vision, Graphics, and Image Processing, 31, 156–177. [CrossRef]
Treisman A. (1986a). Features and objects in visual processing. Scientific American, 255, 114B–125. [CrossRef]
Treisman A. (1986b). Properties, parts, and objects. In Boff K. R. Kaufman L. Thomas J. P. (Eds.), Handbook of human perception and performance (1st ed., vol. 2, pp. 35.31–35.70). New York: John Wiley and Sons.
Treisman A. (1988). Features and objects: The 14th Bartlett memorial lecture. Quarterly Journal of Experimental Psychology, 40, 201–237. [CrossRef] [PubMed]
Treisman A. Gelade G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97–136. [CrossRef] [PubMed]
VanRullen R. (2006). On second glance: Still no high-level pop-out effect for faces. Vision Research, 46, 3017–3027. [CrossRef] [PubMed]
Vickery T. J. King L.-W. Jiang Y. (2005). Setting up the target template in visual search. Journal of Vision, 5, (1):8, 81–92, http://www.journalofvision.org/content/5/1/8, doi:10.1167/5.1.8. [PubMed] [Article] [CrossRef]
Von der Heydt R. Peterhans E. Baumgartner G. (1984). Illusory contours and cortical neuron responses. Science, 224, 1260–1262. [CrossRef] [PubMed]
Williams L. G. (1966). The effect of target specification on objects fixated during visual search. Perception & Psychophysics, 1, 315–318. [CrossRef]
Wolfe J. M. (1994). Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin and Review, 1, 202–238. [CrossRef] [PubMed]
Wolfe J. M. (1998a). Visual search. In Pashler H. (Ed.), Attention (pp. 13–74). Hove, East Sussex, UK: Psychology Press.
Wolfe J. M. (1998b). What do 1,000,000 trials tell us about visual search? Psychological Science, 9, 33–39. [CrossRef]
Wolfe J. M. (2007). Guided Search 4.0: Current progress with a model of visual search. In Gray W. (Ed.), Integrated models of cognitive systems (pp. 99–119). New York: Oxford University Press.
Wolfe J. M. Birnkrant R. S. Kunar M. A. Horowitz T. S. (2005). Visual search for transparency and opacity: Attentional guidance by cue combination? Journal of Vision, 5, (3):9, 257–274, http://www.journalofvision.org/content/5/3/9, doi:10.1167/5.3.9. [PubMed] [Article] [CrossRef]
Wolfe J. M. Cave K. R. Franzel S. L. (1989). Guided search: An alternative to the feature integration model for visual search. Journal of Experimental Psychology: Human Perception and Performance, 15, 419–433. [CrossRef] [PubMed]
Wolfe J. M. DiMase J. S. (2003). Do intersections serve as basic features in visual search? Perception, 32, 645–656. [CrossRef] [PubMed]
Wolfe J. M. Friedman-Hill S. R. Stewart M. I. O'Connell K. M. (1992). The role of categorization in visual search for orientation. Journal of Experimental Psychology: Human Perception and Performance, 18, 34–49. [CrossRef] [PubMed]
Wolfe J. M. Horowitz T. S. (2004). What attributes guide the deployment of visual attention and how do they do it? Nature Reviews Neuroscience, 5, 495–501. [CrossRef] [PubMed]
Wolfe J. M. Palmer E. M. Horowitz T. S. (2010). Reaction time distributions constrain models of visual search. Vision Research, 50, 1304–1311. [CrossRef] [PubMed]
Figure 1
 
It is quite easy to identify the material in the top panel as “glass” and in the bottom panel as “metal.” Stimuli from Sharan et al. (submitted for publication).
Figure 2
 
Reaction time × set size functions for the four conditions of Experiment 1. Error bars are 1 SEM. Green lines show target-present trials. Red lines show target-absent trials. Closed symbols show plastic target data. Open symbols show stone target data.
Figure 3
 
Error rates for Experiment 1. Error bars are 1 SEM.
Figure 4
 
Search for feather among wood.
Figure 5
 
RT × set size functions for the three conditions of Experiment 2. Solid lines show target-present data. Dashed lines show target-absent data. Open circles (green) show search for stone among fur, filled circles (purple) show fur among water, and filled squares (blue) show feather among wood. Error bars show 1 SEM.
Figure 6
 
Error data for the three conditions of Experiment 2. Solid lines show target-present data. Dashed lines show target-absent data. Open circles show search for stone among fur, filled circles show fur among water, and filled squares show feather among wood. Error bars are 1 SEM.