Research Article  |   May 2009
Asymmetric interference between the perception of shape and the perception of surface properties
Author Affiliations
  • Jonathan S. Cant
    CIHR Group on Action and Perception, University of Western Ontario, London, Ontario, Canada
    Neuroscience Program, University of Western Ontario, London, Ontario, Canada
    Department of Psychology, University of Western Ontario, London, Ontario, Canada
    jcant@uwo.ca
  • Melvyn A. Goodale
    CIHR Group on Action and Perception, University of Western Ontario, London, Ontario, Canada
    Neuroscience Program, University of Western Ontario, London, Ontario, Canada
    Department of Psychology, University of Western Ontario, London, Ontario, Canada
    http://psychology.uwo.ca/faculty/goodale
    mgoodale@uwo.ca
Journal of Vision May 2009, Vol.9, 13. doi:https://doi.org/10.1167/9.5.13
Abstract

We previously showed that the processing of shape and the processing of surface properties linked to material properties engage different regions of the ventral stream (J. S. Cant & M. A. Goodale, 2007). Moreover, we recently used Garner's speeded-classification task to show that varying the surface (material) properties of objects does not interfere with shape judgments and vice versa (J. S. Cant, M. E. Large, L. McCall, & M. A. Goodale, 2008). In the present study, we examined Garner interference in a situation where surface cues contributed to the perception of object shape and hypothesized that varying these surface cues would interfere with judgments about the width and the length of the objects. In contrast, we predicted that varying the width and the length of the objects would not interfere with surface-property judgments. This is precisely what we found. These results suggest that the shape and the surface properties of an object cannot be processed independently when both these sets of cues are linked to the perception of the object's overall shape. These observations, together with our previous findings, suggest that the surface cues that contribute to object shape are processed quite separately from the surface cues that are linked to an object's material properties.

Introduction
Object recognition is a seemingly effortless skill that is accomplished through the processing of many different object attributes, such as shape, color, texture, motion, and luminance patterns. By studying individuals with selective lesions to particular areas of the visual cortex, cognitive neuroscientists have gained an understanding of where the different attributes are processed. For example, patients who have damage localized to the lateral regions of the ventral stream such as the lateral occipital area (area LO) can show visual form agnosia; they are unable to perceive the shape or form of objects but can nevertheless perceive surface properties such as color and texture (Goodale & Milner, 2004; Humphrey, Goodale, Jakobson, & Servos, 1994; James, Culham, Humphrey, Milner, & Goodale, 2003; Milner et al., 1991). Conversely, patients with damage to medial regions of the ventral stream such as the fusiform and lingual gyri often show cerebral achromatopsia; they cannot perceive the color of objects but can typically perceive their shape (Duvelleroy-Hommet et al., 1997; Heywood, Gaffan, & Cowey, 1995; for review, see Heywood & Kentridge, 2003). The double dissociation of spared and compromised visual abilities (and the lesions responsible for these behavioral observations) in visual form agnosia and cerebral achromatopsia presents striking evidence for the notion that there are separate shape and surface-property pathways in the primate visual system. 
Inspired by the neuropsychological observations discussed above, we conducted several functional magnetic resonance imaging (fMRI) experiments with healthy volunteers and have shown that the processing of shape and the processing of surface properties (i.e., color and texture) engage anatomically distinct regions of the ventral stream (Cant, Arnott, & Goodale, 2009; Cant & Goodale, 2007). Specifically, the processing of shape selectively recruited lateral regions of the ventral stream such as area LO, whereas the processing of surface properties (particularly texture) selectively recruited more medial regions, such as the collateral sulcus (CoS). The results from these neuroimaging experiments provide further evidence for the notion that there are separate pathways in the ventral stream for the processing of shape and the processing of surface properties. 
However, establishing that distinct cortical regions process shape and surface properties does not necessarily imply that these regions function independently during object recognition. To establish true functional independence, it is necessary to use a sensitive behavioral measure that assesses whether the processing of one attribute interferes with the processing of another. In this regard, we recently used a behavioral paradigm known as Garner's speeded-classification task (Garner, 1974) to assess whether the processing of shape interferes with the processing of surface properties (Cant, Large, McCall, & Goodale, 2008). We showed that varying the surface properties (i.e., color and texture) of objects does not interfere with shape (i.e., width and length) judgments and vice versa. These results suggest that the mechanisms underlying shape and surface-property processing can function independently during object recognition. 
It is important to emphasize that in both the imaging and the psychophysical studies discussed above, the processing of surface properties that we were studying was linked to the material properties of the objects. For example, in Cant and Goodale (2007), participants, when making judgments about the surface properties of the objects, had to ignore the shape of the objects and concentrate implicitly on what the surface properties told them about the material from which the object was made. In Cant et al. (2008), we used objects such as bricks and pieces of wood that had the same overall shape. In other words, the surface properties were again linked to the material of the target objects rather than to their shape. 
However, surface properties can also provide important cues to an object's shape. For example, psychophysical studies have demonstrated that color contributes not only to the perception of surface chromaticity but also to the perception of object geometry (for review, see Shevell & Kingdom, 2008), and neuropsychological evidence has demonstrated a dissociation between chromaticity from color and form from color (Heywood & Kentridge, 2003; Heywood, Kentridge, & Cowey, 1998). Moreover, numerous psychophysical and neuroimaging studies suggest that object shape can be derived from cues such as texture and shading gradients, even when there are no differences in outline contour (Georgieva, Todd, Peeters, & Orban, 2008; Humphrey et al., 1997; Humphrey, Symons, Herbert, & Goodale, 1996; Kleffner & Ramachandran, 1992; Li & Zaidi, 2000, 2001; Ramachandran, 1988a, 1988b; Tsutsui, Sakata, Naganuma, & Taira, 2002). In this situation, one might expect the processing of surface properties to interfere with the processing of object shape derived from other cues such as the dimensions of the object (e.g., length or width). In other words, when participants are making judgments about an object's length or width, differences in the surface cues that lead to differences in perceived shape would be expected to interfere with these judgments. However, this effect may or may not be reciprocal; in certain situations, differences in the outline contour of an object clearly affect the perception of the surface of that object (Knill & Kersten, 1991; Ramachandran, 1988b; see General discussion section for a more detailed account of these studies). In other situations, however, differences in the contour of an object may contribute little to the perception of that object's surface properties. 
The various sets of stimuli used in this study are consistent with the latter situation, and we think it is reasonable to predict that participants might be able to make judgments about the surface properties of these stimuli without any interference from changes in the dimensions of the stimuli. In the present study, we used Garner's speeded-classification task to examine the interactions between the processing of an object's shape defined by its dimensions (i.e., width and length) and the processing of an object's shape defined by its surface properties. Thus, unlike our previous behavioral study where surface properties were associated with the material properties of objects (Cant et al., 2008), the surface properties we used in the present study (i.e., texture and shading gradients) were associated with the object's shape. 
Garner's speeded-classification task measures how efficiently people can process one attribute of an object while ignoring its other attributes (Garner, 1974). In a Garner task, participants attend to a single attribute of an object under two different conditions. In the “baseline” condition, only the relevant (attended) attribute varies, while another, irrelevant attribute is kept constant. In the “filtering” condition, however, both the relevant and irrelevant attributes vary. If participants are able to process these two attributes independently, then the speed and accuracy of their responses to the relevant attribute should be identical in both the baseline and filtering conditions. If this turns out to be the case, then the two attributes would be classified as separable. Separable attributes include the position of lines and their luminance contrast; participants can discriminate changes in the position of a line while successfully ignoring its luminance contrast and vice versa (Shechter & Hochstein, 1992). If participants cannot process the two attributes independently, however, then the speed and accuracy of their responses to the relevant attribute should be worse in the filtering condition than in the baseline condition. This is believed to occur because participants are not able to “filter out” the changes in the irrelevant attribute while attending to the relevant attribute. In this situation, the two attributes would be classified as integral, and they would be said to show “Garner interference”. Integral attributes include the length of lines and their orientation; varying line orientation interferes with participants' judgments of line length and vice versa (Dick & Hochstein, 1988). 
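The logic of the baseline and filtering conditions can be sketched in a few lines of Python. This is an illustrative reconstruction, not the software used in the study; the attribute names and levels are hypothetical.

```python
import random

def make_block(relevant_levels, irrelevant_levels, block_type, n_trials=32):
    """Return a pseudorandom trial list for one Garner block.

    In a 'baseline' block the irrelevant attribute is held at a single
    constant level; in a 'filtering' block it varies from trial to trial.
    """
    if block_type == "baseline":
        irrelevant = [irrelevant_levels[0]] * n_trials
    elif block_type == "filtering":
        irrelevant = [random.choice(irrelevant_levels) for _ in range(n_trials)]
    else:
        raise ValueError("block_type must be 'baseline' or 'filtering'")

    # Present each level of the relevant attribute equally often.
    relevant = relevant_levels * (n_trials // len(relevant_levels))
    random.shuffle(relevant)
    return list(zip(relevant, irrelevant))

baseline = make_block(["wide", "narrow"], ["pattern A", "pattern B"], "baseline")
filtering = make_block(["wide", "narrow"], ["pattern A", "pattern B"], "filtering")
```

Under this sketch, a separable attribute pair predicts equal performance on `baseline` and `filtering` trial lists, whereas an integral pair predicts slower or less accurate responses on the `filtering` list.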
The present series of experiments was inspired by an initial attempt to find interference between shape and surface properties using stimuli where differences in both shading and texture (simultaneously) led to a percept of 3-D shape. These initial results suggested that it was possible to find interference between shape and surface cues, but since these results were not as reliable as we would have liked, we decided to pursue this potential effect further by using more controlled stimuli (i.e., by looking at the effects of shading and texture separately). We conducted four experiments in the present study. In Experiments 1 and 2, we looked for potential interference between the processing of length (or width) and the processing of surface texture and shading, respectively. Given the psychophysical and neuroimaging evidence for the contribution of texture and shading gradients to the perception of shape, one might expect that varying surface properties would interfere with judgments about an object's dimensions (i.e., length or width). Varying the length or width of an object, however, might not interfere with surface-property judgments because those dimensions do not contribute to the perception of differences in the surface properties of the stimuli used in these experiments. In other words, as discussed earlier, we predicted an asymmetry in the direction of interference in Experiments 1 and 2. In Experiment 3, we investigated whether or not the interactions between shape and shading could be accounted for by low-level image features such as luminance polarity. Specifically, we used horizontal shading gradients that had the same luminance polarity as those used in Experiment 2 but which are known to have a less pronounced effect on the perception of shape (Ramachandran, 1988a, 1988b). 
We reasoned that using more ambiguous shape-from-shading stimuli such as these might abolish (or at least reduce) the effect of differences in shading on judgments of length or width. Finally, in Experiment 4 we investigated whether or not the orientation of the shading gradients we used contributed to the potential interference between shape and shading. Specifically, variations in vertical shading gradients may have caused interference with shape judgments not because they contributed to 3-D shape per se, but because reversing the polarity of vertical shading gradients is particularly distracting (more distracting than reversing the polarity of horizontal shading gradients). Thus, in this experiment we used shading gradients that had the same orientation as those used in Experiment 2 (i.e., vertical), but unlike the stimuli in that experiment, the difference in luminance polarity of these stimuli did not contribute to the perception of 3-D shape (i.e., the stimuli had a sharp, instead of a gradual, change in luminance polarity). We reasoned that using vertical shading gradients that did not contribute to 3-D shape perception would eliminate any interference between shape and shading observed previously. 
Experiment 1: Interaction between shape and texture
Participants completed two tasks in this experiment. In the main task (Shape–Texture task), participants were required to classify oval objects on the basis of their width (or length), while ignoring the texture (curved lines that gave a convex appearance or straight lines that gave a flat appearance) of those objects, and vice versa. In the control task (Shape-only task), participants were required to classify oval objects on the basis of their width while ignoring their length, and vice versa. In this task, texture never varied. We included the Shape-only task as a control because previous research has demonstrated robust Garner interference between the dimensions of width and length (Cant et al., 2008; Dykes & Cooper, 1978; Felfoldy, 1974; Ganel & Goodale, 2003; Macmillan & Ornstein, 1998). Thus, if we observed interference between width and length in the present study, we could be confident that our stimuli and experimental parameters were well suited to investigating potential interference between the main attributes of interest, which are object dimensions (length or width) and texture. 
Participants' response latencies and accuracy in each classification task were recorded. In the Shape–Texture task, we expected asymmetric interference between judgments of length (or width) and judgments of texture. That is, when classifying objects on the basis of their dimensions, we expected participants to show longer response latencies and commit more errors in the filtering trials (where both the relevant and irrelevant attributes varied) compared to the baseline trials (where only the relevant attribute varied). In contrast, we did not expect this difference in performance to occur across the baseline and filtering trials when participants classified objects on the basis of their texture. In other words, we expected that participants could attend to texture and completely ignore changes in length (or width) but not vice versa. In the Shape-only task, we expected that participants would show longer response latencies and commit more errors in the filtering trials than in the baseline trials. In other words, we expected that they could not ignore changes in length when making width judgments and vice versa. 
Methods
Participants
Thirteen individuals (8 females, 5 males; 12 right-handed, 1 left-handed; mean age = 26.77 years, range = 18–61 years) participated in this experiment. The participants were selected from research assistants, undergraduate students, graduate students, and post-doctoral fellows studying at the University of Western Ontario. Participants had normal, or corrected-to-normal, visual acuity and reported no history of neurological impairment. They received $10 for their participation. All participants gave their informed consent and the experiment was approved by the Review Board for Health Sciences Research involving Human Participants for the University of Western Ontario. 
Stimuli and apparatus
The stimuli used in this experiment were inspired by stimuli used in previous shape-from-texture experiments (see Li & Zaidi, 2003; Zaidi & Li, 2002). Our stimulus set consisted of images of ovals, which varied in their width (or length) and in their texture (see Figure 1). With respect to the dimensions of the objects, the stimuli were presented in two different widths (wide = 35 mm, narrow = 29 mm) and two different lengths (long = 56 mm, short = 50 mm). The stimuli were also presented in two different textures, which were defined in terms of differences in the local orientations of the lines present within the contours of the ovals. One texture was made up of curved lines and appeared convex (labeled as “pattern A” to the participants), and the other texture was made up of straight lines and appeared flat (labeled as “pattern B”). 
Figure 1
 
Examples of the stimuli used in Experiment 1. The stimuli could vary along two shape dimensions (width and length) and could vary between two texture patterns (curved lines that gave a convex appearance or straight lines that gave a flat appearance).
Participants sat at a desk in a darkened room with their head stabilized by a headrest, and stimuli were presented on a CRT monitor (1280 × 1024 pixels) located directly in front of them. Stimulus presentation was controlled by Superlab Pro version 2.0.4 (Cedrus Corporation, San Pedro, CA). Stimuli were always presented at the center of the computer screen, and the distance from the participants' eyes to the screen was approximately 40 cm. When classifying the stimuli based on the dimensions and textures listed above, participants responded by pressing either the “1” or “3” key on the number pad of the keyboard with their right index or ring finger, respectively. Response latency and accuracy measures were recorded by the Superlab Pro software. During the 2000-ms intertrial interval, a small fixation cross was presented in the center of the screen. 
Procedure
All participants completed both the Shape-only task and the Shape–Texture task. The order of these tasks was counterbalanced across participants. In the Shape-only task, there were two conditions. In the width condition, participants classified the stimuli based on their width (i.e., “wide” vs. “narrow”); in the length condition, participants classified the stimuli on the basis of their length (i.e., “long” vs. “short”). The order of these conditions was counterbalanced across participants. The texture of the stimuli in the Shape-only task was kept constant for each participant (but each of the two textures was used roughly equally across all participants). In the Shape–Texture task, there were also two conditions. In the shape condition of this task, half the participants classified each stimulus on the basis of its width and half on the basis of its length. In the texture condition, the participants classified each stimulus on the basis of its surface texture (i.e., “pattern A” vs. “pattern B”). The different combinations of shape and texture cues were counterbalanced across participants in the Shape–Texture task. In other words, the two possible combinations, width–texture and length–texture, were almost equally represented, with one combination being randomly assigned to each participant. Thus, in the Shape–Texture task, each participant made both shape judgments and texture judgments. Again the order of presentation of the two conditions was counterbalanced across subjects. Prior to starting the experiment, participants were shown a display containing examples of the stimuli that they were going to encounter during the testing session. To ensure that participants perceived a stable “shape-from-texture” effect, each participant was asked to describe what they were viewing on the computer screen. All participants reported seeing the ovals with curved lines as convex and the ovals with straight lines as flat. 
Before starting each condition of each task, participants were given 20 practice trials to become familiar with the task. The participant's task upon presentation of the stimulus on any given trial was to classify that stimulus as quickly and accurately as possible. Verbal feedback was provided where necessary. The stimulus remained on the computer screen until a response was made. Immediately following the response, there was a 2000-ms interval until the presentation of the next stimulus. Participants were instructed to maintain fixation on the cross during this interval. For the experiment proper, each task consisted of eight blocks of trials. In each condition of each task, participants classified stimuli in four separate blocks of 32 trials each. Two of the four blocks of trials served as baseline blocks (where only the relevant attribute varied), while the other two blocks of trials served as filtering blocks (where both the relevant and irrelevant attributes varied). The order of presentation of the four blocks was counterbalanced across conditions and participants. For each block of 32 trials, the two possible responses for the relevant attribute (e.g., “wide” or “narrow” in the case of width) were presented an equal number of times in pseudorandom order. Half of the participants pressed “1” for wide and “3” for narrow, for example, and the other half of the participants had these button assignments reversed. Again the assignment of the response buttons was counterbalanced across participants and conditions. An instruction screen separated each block informing participants that they could take a short break if they desired, and reminding them to respond as quickly and accurately as possible in the next block of trials. Each participant completed 512 trials during the entire experimental session (32 trials × 4 blocks × 2 conditions × 2 tasks = 512 trials). 
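The session structure described above (2 tasks × 2 conditions × 4 blocks × 32 trials) can be enumerated in a short sketch. The task and condition labels follow the text, but the code itself is an illustrative reconstruction, not the Superlab scripts used in the study.

```python
# Illustrative sketch of the session structure: 2 tasks x 2 conditions
# x 4 blocks (2 baseline + 2 filtering) x 32 trials = 512 trials total.
TASKS = {
    "Shape-only": ["width", "length"],
    "Shape-Texture": ["shape", "texture"],
}
BLOCK_TYPES = ["baseline", "baseline", "filtering", "filtering"]
TRIALS_PER_BLOCK = 32

# Enumerate every (task, condition, block type) cell of the session.
session = [(task, condition, block_type)
           for task, conditions in TASKS.items()
           for condition in conditions
           for block_type in BLOCK_TYPES]

total_trials = len(session) * TRIALS_PER_BLOCK  # 16 blocks x 32 trials = 512
```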
Results
Because of the inherent differences in design between the two tasks, separate analyses of variance were conducted for the Shape-only and the Shape–Texture tasks. Response latencies (for correct trials only) and the number of errors committed were analyzed in both cases using a 2 × 2 repeated-measures analysis of variance, alpha = 0.05. The two factors were Condition (width and length for the Shape-only task; shape and texture for the Shape–Texture task) and Block type (baseline and filtering). Pairwise post-hoc comparisons were performed using alpha = 0.05. An outlier analysis was performed, and response latencies that were 2.5 standard deviations above or below the mean reaction time for each condition in each task were excluded from the analysis. No outlier analysis was performed on the number of errors committed. 
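The 2.5-standard-deviation trimming rule described above can be sketched as follows; this is an illustrative reconstruction (the analysis software is not specified in the text), assuming raw correct-trial latencies in milliseconds grouped by condition.

```python
import statistics

def trim_outliers(latencies, n_sd=2.5):
    """Drop latencies more than n_sd standard deviations from the mean.

    Applied separately to each condition of each task, mirroring the
    per-condition exclusion rule described in the text.
    """
    mean = statistics.mean(latencies)
    sd = statistics.stdev(latencies)
    lo, hi = mean - n_sd * sd, mean + n_sd * sd
    return [rt for rt in latencies if lo <= rt <= hi]
```

For example, a single very slow response (e.g., a 5000-ms latency among responses clustered near 500 ms) falls outside the upper threshold and is excluded before the ANOVA.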
Shape-only task
The main effect of condition (width: mean [M] = 556 ms, standard error of the mean [SEM] = 27; length: M = 625 ms, SEM = 30) for response latency was significant, F(1, 12) = 21.69, p < 0.001, mean square error (MSE) = 2887.85. The main effect of block type (baseline: M = 574 ms, SEM = 25; filtering: M = 607 ms, SEM = 30) was also significant, F(1, 12) = 11.82, p < 0.005, MSE = 1188.07. The interaction between condition and block type, however, was not significant, F(1, 12) = 1.15, p > 0.30, MSE = 498.48. Prior to conducting the experiment, we made a priori predictions that response latencies in the baseline and filtering blocks would differ significantly from each other in both the width and length conditions. Thus, to confirm our predictions, we carried out pairwise post-hoc comparisons to investigate the differences between baseline and filtering blocks in each condition. As we predicted, response latencies in the baseline blocks of the width condition (M = 542 ms, SEM = 25) were significantly faster than the response latencies in the filtering blocks (M = 569 ms, SEM = 29), t(12) = 2.30, p < 0.05 (see Figure 2A). Similarly, response latencies in the baseline blocks (M = 605 ms, SEM = 28) of the length condition were significantly faster than those in the filtering blocks (M = 645 ms, SEM = 32), t(12) = 3.48, p < 0.005. 
Figure 2
 
Results for each judgment in each task (width: width is the relevant attribute, length is the irrelevant attribute; length: length relevant and width irrelevant; shape: width or length relevant, texture irrelevant; texture: texture relevant, width or length irrelevant) from Experiment 1. Light bars represent baseline blocks, where only the relevant attribute varies, and dark bars represent filtering blocks, where both the relevant and irrelevant attributes vary. All statistical comparisons are between the absence versus the presence of variation on the irrelevant attribute for each task (i.e., between the light and the dark bars for each task). Results are based on data from 13 participants, in a repeated-measures design. Error bars indicate 95% confidence intervals derived using the mean square error term from the repeated-measures analyses of variance. (A) Results for participants' response latencies in the Shape-only task (width and length judgments) and the Shape–Texture task (width or length and texture judgments). * p < 0.05; ** p < 0.01. (B) Results for the number of errors committed in both tasks of Experiment 1 (ms = milliseconds).
In the analysis on the number of errors committed, the main effect of condition (width: M = 4.50, SEM = 0.75; length: M = 9.39, SEM = 1.13) was significant, F(1, 12) = 19.13, p < 0.001, MSE = 16.22. The main effect of block type (baseline: M = 5.85, SEM = 0.84; filtering: M = 8.04, SEM = 1.07), however, did not reach significance, F(1, 12) = 3.74, p = 0.077, MSE = 16.69. The interaction between condition and block type was also not significant, F(1, 12) = 0.17, p > 0.69, MSE = 14.12. Again, based on our initial predictions, we conducted post-hoc pairwise comparisons to investigate the interaction in greater detail (see Figure 2B). Similar to the findings with response latency, participants committed fewer errors in the baseline blocks (M = 3.62, SEM = 0.67) of the width condition compared to the filtering blocks (M = 5.39, SEM = 1.05), but this result did not reach significance, t(12) = 1.91, p = 0.081. In the length condition, participants also committed fewer errors in the baseline blocks (M = 8.08, SEM = 1.25) compared to the filtering blocks (M = 10.69, SEM = 1.71), but, like the results from the width condition, this difference was not significant, t(12) = 1.33, p > 0.20. 
Shape–texture task
For response latencies, the main effect of condition (shape: M = 570 ms, SEM = 27; texture: M = 512 ms, SEM = 20) was significant, F(1, 12) = 13.13, p < 0.003, MSE = 3348.94. The main effect of block type (baseline: M = 531 ms, SEM = 22; filtering: M = 550 ms, SEM = 24) was also significant, F(1, 12) = 5.65, p < 0.04, MSE = 761.81. The interaction between condition and block type, however, did not reach significance, F(1, 12) = 2.43, p > 0.14, MSE = 488.44. In this task, we predicted that we would find asymmetric interference between shape and texture. That is, we predicted that we would find a significant difference in response latencies between the baseline and filtering blocks in the shape condition but not in the texture condition. Thus, to confirm these predictions, we conducted pairwise post-hoc comparisons between the baseline and filtering blocks in both the shape and texture conditions (see Figure 2A). Just as we predicted, we found that response latencies in the baseline and filtering blocks of the shape condition (baseline: M = 556 ms, SEM = 27; filtering: M = 584 ms, SEM = 29) differed significantly from each other, t(12) = 2.24, p < 0.05. Moreover, the differences in response latencies in the baseline and filtering blocks of the texture condition (baseline: M = 507 ms, SEM = 19; filtering: M = 516 ms, SEM = 21) did not reach significance, t(12) = 1.38, p > 0.19, in line with our initial prediction. 
The analysis on the number of errors committed yielded a significant main effect of condition (shape: M = 4.23, SEM = 1.07; texture: M = 1.39, SEM = 0.50), F(1, 12) = 6.33, p < 0.03, MSE = 16.64. The main effect of block type, however, was not significant (baseline: M = 2.62, SEM = 0.65; filtering: M = 3.00, SEM = 0.65; F(1, 12) = 1.05, p > 0.32, MSE = 1.84). In contrast, the condition-by-block type interaction did reach significance, F(1, 12) = 5.57, p < 0.04, MSE = 2.33. Using the same logic outlined in the analysis on response latencies above, we conducted a simple main effects analysis to investigate the significant interaction in greater detail (see Figure 2B). Participants committed fewer errors in the baseline blocks (M = 3.54, SEM = 1.14) of the shape condition compared to the filtering blocks (M = 4.92, SEM = 1.12), but this result did not reach significance, t(12) = 1.95, p = 0.076. Moreover, no significant differences were observed on the number of errors committed in the baseline and filtering blocks for the texture condition (baseline: M = 1.69, SEM = 0.60; filtering: M = 1.08, SEM = 0.46; t(12) = 1.67, p > 0.12). 
Discussion
Just as we predicted, in the Shape-only task, we found that the two components of shape (width and length) acted as integral dimensions. Varying the length of an object (between long and short) interfered with participants' width judgments (classifying objects as wide or narrow), and varying the width of an object interfered with their length judgments. Specifically, participants were better at making width or length judgments when only the relevant attribute varied in the baseline blocks. Their performance deteriorated in the filtering blocks when both the relevant and irrelevant attributes varied. These results add to a large body of literature suggesting that object shape is perceived holistically (e.g., Cant et al., 2008; Ganel & Goodale, 2003). It is extremely difficult, if not impossible, to attend to one dimension, such as width, while ignoring changes in another, such as length. With this replication of the standard finding in the Garner literature, we can be quite confident that the stimuli and apparatus employed in this experiment are well suited to investigate potential interference between changes in object dimensions (width and length) and changes in surface texture. 
In the Shape–Texture task, we found an asymmetric pattern of interference. Varying texture significantly interfered with participants' shape judgments, in that participants were faster to classify changes in width or length in the baseline blocks where texture remained constant, compared to the filtering blocks where texture randomly varied. We should note that this effect on performance was present only in participants' response latencies and was not observed in the number of errors they committed (and this was also true of the results in the Shape-only task). This presumably reflects the fact that in this experiment (and in general), response latency is a much more sensitive measure than error rate, as the error rate was so low and variable. 
In contrast to the effects of texture on judgments of width or length, varying width or length did not interfere with texture judgments. The speed and accuracy of participants' responses when classifying texture were quite similar in both the baseline and filtering blocks. In other words, participants were able to process texture independently of changes in shape. These results suggest that in Garner's language, the processing of changes in object shape and the processing of changes in surface texture are only partially separable. On the one hand, when surface texture contributes to the perception of shape, as it most certainly does here, these attributes share common processing resources. On the other hand, when shape contributes little to the perception of texture, these attributes do not share common processing resources. This is likely the reason why we found no interference between these attributes when participants were focusing their attention on differences in texture (we do not mean to imply that shape never contributes to surface-property processing, but simply point out that this relationship was either not present in the stimuli used in this experiment, or was not readily apparent because of the particular task that we used; see General discussion section for more details). 
On the face of it, these results differ from our previous findings where we found no interference between shape and surface properties (Cant et al., 2008). However, as mentioned previously, in that study, the surface properties (i.e., color and texture) were clearly associated with the material from which the objects were made (i.e., brick and wood) and had no obvious connection to shape. As such, it is perhaps not surprising that we found no interference between shape and surface properties in that experiment, as the perception of one attribute was not related to the perception of the other. [Indeed, we have also shown that the processing of shape and the processing of surface properties (related to material) engage anatomically distinct regions in occipito-temporal cortex, a finding that converges nicely with our previous behavioral findings (Cant et al., 2009; Cant & Goodale, 2007; Cant et al., 2008).] In the present study, however, we used stimuli where variations in surface texture were associated more with variations in shape and reasoned that these differences in surface texture would interfere with judgments of length or width in Garner's speeded-classification task. Indeed, this is precisely what we found, and this pattern of interference converges nicely with the results of previous studies demonstrating that texture gradients can be used to construct representations of object shape (Georgieva et al., 2008; Li & Zaidi, 2000, 2001; Todd, Norman, Koenderink, & Kappers, 1997; Tsutsui et al., 2002). 
Having established that variations in texture gradients can interfere with shape judgments, we conducted a second experiment to provide converging evidence for the notion that surface properties in general, when connected to the perception of shape, can interfere with shape judgments. Thus, in Experiment 2, we again looked for asymmetric interference between changes in object shape and changes in surface properties, but this time used shading gradients rather than texture gradients. There is an extensive literature showing that shading can contribute to the perception of shape (Humphrey et al., 1997, 1996; Kleffner & Ramachandran, 1992; Ramachandran, 1988a, 1988b), and the differences in shading that we used in Experiment 2 did indeed create a percept of 3-D shape (see Figure 3). 
Figure 3
 
Examples of the stimuli used in Experiment 2. The stimuli could vary along two shape dimensions (width and length) and could vary between two shading patterns (vertical shading gradients that were darker on the bottom and appeared convex, or darker on the top and appeared concave).
Experiment 2: Interaction between shape and shading
A separate set of participants completed the same tasks that the participants in Experiment 1 completed, with the exception that shading gradients were used, rather than texture gradients. In the Shape-only task, we again predicted that width and length would interfere with each other. In the Shape–Shading task, participants were instructed to classify oval stimuli based on their width (or length) while ignoring their shading (vertical shading gradients, which were darker on the bottom or darker on the top), and vice versa. Based on the visual system's assumption that there is a single light source in the scene, which shines from above, ovals that are darker on the bottom appear convex, and conversely, ovals that are darker on the top appear concave (Ramachandran, 1988a, 1988b). This difference in shading leads to a profound perception of 3-D shape when comparing the two types of ovals. Because of the strength of this percept, we expected to find an asymmetric pattern of interference between width (or length) and shading in the present experiment, just as we did for width (or length) and texture in Experiment 1. To reiterate, varying shading should affect participants' performance when classifying differences in width or length, but varying width or length should not affect participants' performance when classifying shading. 
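The logic of the baseline and filtering blocks can be made concrete with a short sketch. The code below is our illustration of Garner's speeded-classification design, not the authors' experiment code; the names and trial counts are assumptions.

```python
# Sketch of Garner baseline vs. filtering trial lists. The relevant
# dimension here is width; the irrelevant dimension is the shading
# gradient (labels are ours, for illustration only).
import itertools
import random

WIDTHS = ["wide", "narrow"]             # relevant dimension
SHADINGS = ["bottom dark", "top dark"]  # irrelevant dimension

def baseline_block(n_trials, irrelevant="bottom dark"):
    """Only the relevant dimension varies; the irrelevant one is fixed."""
    return [(random.choice(WIDTHS), irrelevant) for _ in range(n_trials)]

def filtering_block(n_trials):
    """Both dimensions vary orthogonally and unpredictably."""
    combos = list(itertools.product(WIDTHS, SHADINGS))  # 4 stimulus types
    return [random.choice(combos) for _ in range(n_trials)]

base = baseline_block(32)
filt = filtering_block(32)
```

Garner interference is then measured as the performance cost (filtering minus baseline) on the relevant judgment: if the irrelevant dimension cannot be ignored, responses slow down in the filtering block.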
Participants
Twelve individuals (including author JSC; 5 females, 7 males; 11 right-handed, 1 left-handed; mean age = 22.60 years, range = 18–32 years) participated in this experiment. The participants were selected using the same inclusion–exclusion criteria described for Experiment 1. 
Stimuli and apparatus
Stimuli used in this experiment consisted of images of ovals, which varied in their shape and shading (see Figure 3). The dimensions of the ovals were identical to those used in Experiment 1. The stimuli were also presented in two different shading patterns. One vertical shading gradient was darker on the bottom (labeled as “bottom dark” to participants) and thus appeared convex, and the other vertical shading gradient was darker on the top (labeled as “top dark”) and thus appeared concave. 
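A stimulus of this kind can be sketched as a vertical luminance ramp masked by an ellipse. The code below is our own minimal reconstruction for illustration, not the authors' stimulus-generation code, and the image dimensions are arbitrary assumptions.

```python
# Minimal sketch of a shaded oval: a vertical luminance gradient
# restricted to an elliptical region on a mid-grey background.
import numpy as np

def shaded_oval(h=200, w=300, bottom_dark=True):
    y, x = np.mgrid[0:h, 0:w]
    # Elliptical mask centred in the image.
    mask = ((x - w / 2) / (w / 2)) ** 2 + ((y - h / 2) / (h / 2)) ** 2 <= 1.0
    # Luminance ramps linearly along the vertical axis.
    ramp = y / (h - 1)                      # 0 at top, 1 at bottom
    lum = 1.0 - ramp if bottom_dark else ramp
    img = np.full((h, w), 0.5)              # mid-grey background
    img[mask] = lum[mask]
    return img

convex = shaded_oval(bottom_dark=True)    # "bottom dark": appears convex
concave = shaded_oval(bottom_dark=False)  # "top dark": appears concave
```

Under the assumption of overhead lighting, the "bottom dark" ramp is consistent with a bump and the "top dark" ramp with a dent, which is why the two patterns yield opposite 3-D percepts.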
The apparatus used in this experiment was identical to that used in Experiment 1. 
Procedure
All participants completed both the Shape-only task and the Shape–Shading task. The Shape-only task of this experiment was identical to the Shape-only task of Experiment 1 and will therefore not be described in any more detail. Similarly, the Shape–Shading task of this experiment was identical to the Shape–Texture task of Experiment 1, with the exception that participants classified variations in shading gradients (i.e., “bottom dark” vs. “top dark”) instead of variations in texture gradients. Similar to the procedure in Experiment 1, participants were shown examples of the stimuli prior to starting the experiment, and to ensure that participants perceived a stable “shape-from-shading” effect, each participant was asked to describe what they were viewing on the computer screen. All participants reported seeing the ovals with dark bottoms as convex, and the ovals with dark tops as concave. All other procedures for this experiment were identical to those in Experiment 1. 
Results
The analysis used in this experiment was identical to the one used in Experiment 1. 
Shape-only task
The main effect of condition (width: M = 555 ms, SEM = 23; length: M = 585 ms, SEM = 16) for response latency was not significant, F(1, 11) = 1.71, p > 0.21, MSE = 6184.41. In contrast, the main effect of block type (baseline: M = 548 ms, SEM = 17; filtering: M = 592 ms, SEM = 18) was significant, F(1, 11) = 11.31, p < 0.006, MSE = 2114.53. The interaction between condition and block type was not significant, F(1, 11) = 0.54, p > 0.47, MSE = 835.88. Similar to Experiment 1, post-hoc pairwise comparisons showed that response latencies in the baseline blocks (M = 530 ms, SEM = 23) of the width condition were significantly faster than the response latencies in the filtering blocks (M = 581 ms, SEM = 24), t(11) = 4.02, p < 0.002 (see Figure 4A). Similarly, response latencies in the baseline blocks (M = 566 ms, SEM = 17) of the length condition were faster than those in the filtering blocks (M = 604 ms, SEM = 20), and this difference approached significance, t(11) = 2.11, p = 0.058. 
Figure 4
 
Results for each judgment in each task (width: width is the relevant attribute, length is the irrelevant attribute; length: length relevant and width irrelevant; shape: width or length relevant, shading irrelevant; shading: shading relevant, width or length irrelevant) from Experiment 2. Light bars represent baseline blocks, where only the relevant attribute varies, and dark bars represent filtering blocks, where both the relevant and irrelevant attributes vary. All statistical comparisons are between the absence versus the presence of variation on the irrelevant attribute for each task (i.e., between the light and the dark bars for each task). Results are based on data from 12 participants, in a repeated-measures design. Error bars indicate 95% confidence intervals derived using the mean square error term from the repeated-measures analyses of variance. (A) Results for participants' response latencies in the Shape-only task (width and length judgments) and the Shape–Shading task (width or length and shading judgments). ** p < 0.01; ^ p = 0.058. (B) Results for the number of errors committed in both tasks of Experiment 2. * p < 0.05; ms = milliseconds.
In the analysis on the number of errors committed, the main effect of condition (width: M = 2.21, SEM = 0.53; length: M = 6.25, SEM = 1.52) reached significance, F(1, 11) = 8.11, p < 0.02, MSE = 24.16. Although this was the only effect to reach significance, both the main effect of block type (baseline: M = 3.79, SEM = 0.87; filtering: M = 4.67, SEM = 0.97; F(1, 11) = 3.81, p = 0.077, MSE = 2.42) and the condition-by-block type interaction approached significance (F(1, 11) = 3.84, p = 0.076, MSE = 3.96). Because we expected to find significant differences between the baseline and filtering blocks in each condition, we carried out pairwise comparisons to investigate the interaction between condition and block type further (see Figure 4B). The difference between the number of errors committed by participants in the baseline (M = 2.33, SEM = 0.72) and filtering blocks (M = 2.08, SEM = 0.50) of the width condition was not significant, t(11) = 0.39, p > 0.70. In the length condition, participants committed significantly fewer errors in the baseline blocks (M = 5.25, SEM = 1.34) compared to the filtering blocks (M = 7.25, SEM = 1.78), t(11) = 2.48, p < 0.04. 
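The figure captions note that the error bars are 95% confidence intervals derived from the repeated-measures ANOVA's mean square error. A common way to compute such an interval (in the spirit of Loftus and Masson's within-subject CIs) is sketched below; the formula is our reconstruction, not the authors' code, though the MSE, degrees of freedom, and sample size are taken from the block-type effect in the response-latency analysis above.

```python
# Hedged sketch of a within-subject 95% CI half-width computed from an
# ANOVA mean square error term (Loftus-Masson style; our reconstruction).
import math
from scipy import stats

def within_subject_ci(mse, df_error, n, conf=0.95):
    # half-width = t_crit(df_error) * sqrt(MSE / n)
    t_crit = stats.t.ppf(1 - (1 - conf) / 2, df_error)
    return t_crit * math.sqrt(mse / n)

# Values from the Shape-only RT analysis of Experiment 2:
# MSE = 2114.53, error df = 11, n = 12 participants.
ci = within_subject_ci(mse=2114.53, df_error=11, n=12)
print(f"95% CI half-width = +/- {ci:.1f} ms")
```

With these values the half-width works out to roughly ±29 ms, which is the scale one would expect for the error bars on the latency panels.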
Shape–shading task
For response latencies, the main effect of condition (shape: M = 516 ms, SEM = 16; shading: M = 445 ms, SEM = 18) was significant, F(1, 11) = 8.35, p < 0.02, MSE = 7383.94. The main effect of block type (baseline: M = 474 ms, SEM = 12; filtering: M = 486 ms, SEM = 13) approached significance, F(1, 11) = 4.03, p = 0.07, MSE = 463.70. Finally, the interaction between condition and block type reached significance, F(1, 11) = 5.79, p < 0.04, MSE = 409.07. Next, we conducted pairwise tests to shed light on the precise pattern of responses in the baseline and filtering blocks of the shape and shading conditions (see Figure 4A). As it turns out, response latencies in the baseline (M = 503 ms, SEM = 17) and filtering blocks (M = 530 ms, SEM = 16) differed significantly in the shape condition, t(11) = 3.21, p < 0.008, but this difference was not significant in the shading condition (baseline: M = 445 ms, SEM = 19; filtering: M = 444 ms, SEM = 19; t(11) = 0.18, p > 0.85). 
The analysis on the number of errors yielded no significant effects: neither the main effect of condition (shape: M = 2.42, SEM = 0.71; shading: M = 1.71, SEM = 0.61; F(1, 11) = 0.63, p > 0.40, MSE = 9.57), nor the main effect of block type (baseline: M = 1.92, SEM = 0.51; filtering: M = 2.21, SEM = 0.49; F(1, 11) = 2.15, p > 0.17, MSE = 0.48), nor the condition-by-block type interaction, F(1, 11) = 0.11, p > 0.74, MSE = 1.64, reached significance. Moreover, no significant differences were observed in the number of errors committed in the baseline and filtering blocks for either the shape condition (baseline: M = 2.33, SEM = 0.80; filtering: M = 2.50, SEM = 0.69; t(11) = 0.35, p > 0.73) or the shading condition (baseline: M = 1.50, SEM = 0.65; filtering: M = 1.92, SEM = 0.62; t(11) = 1.16, p > 0.26; see Figure 4B). 
Discussion
The results of Experiment 2 again provide evidence that width and length share common processing resources and are integral dimensions of object shape. Varying length affected participants' performance on a width discrimination task (i.e., longer response latencies in the filtering blocks compared to the baseline blocks), and varying width affected participants' performance on a length discrimination task. While differences in response latencies in the width condition were highly significant (p < 0.002), differences in latency in the length condition only approached significance (p = 0.058). As it turns out, this difference in the strength of the Garner effect between width and length was observed in the majority of the experiments conducted in our previous study using the Garner task (Cant et al., 2008)—and also, as we shall see, in Experiment 3. There may be a number of interacting factors at work here, including differences in the JNDs of length and width (by definition), the horizontal–vertical illusion in which the same line segment is perceived as being longer when oriented vertically than when it is oriented horizontally (Lipshits, McIntyre, Zaoui, Gurfinkel, & Berthoz, 2001; Prinzmetal & Gettleman, 1993), and/or basic asymmetries in the effects of varying one dimension on the perception of the other. However, such speculation has to be tempered by the fact that the direction of this asymmetry was reversed in the results of the Shape-only task from Experiment 1 (and was absent in Experiment 4). Whatever might be going on here, it is important to emphasize that changes in one dimension have typically been found to affect judgments of the other (e.g., Felfoldy, 1974; Ganel & Goodale, 2003). 
Importantly, the results of Experiment 2 demonstrate a clear asymmetric pattern of interference between shape defined by variations in width or length and shape from shading. That is, varying shading interfered with participants' performance on both a width- and a length-discrimination task, but varying length or width had no effect on participants' performance on the shading-discrimination task. This most certainly reflects the fact that the shading gradients were used to construct a percept of overall shape, but overall shape (i.e., width or length) was not being used to determine the direction of the shading gradient (because of the stimuli that we used or because of task demands; see General discussion section). These findings are in agreement with numerous studies that have demonstrated that shading can contribute to the perception of 3-D shape (Humphrey et al., 1997, 1996; Kleffner & Ramachandran, 1992; Ramachandran, 1988a, 1988b). At some level of visual analysis, therefore, the attributes of width, length, and shape from shading must share common processing resources. 
Just as we predicted, the asymmetric pattern of interference observed between changes in object shape and changes in shading in this experiment was similar to the interference observed between shape and texture in Experiment 1. This is encouraging and reinforces our idea that in some situations, the perception of shape derived from changes to an object's dimensions and the perception of shape derived from changes to its surface properties can interfere with each other. We believe this interference was observed in the present experiment because the differences in the shading gradients used created a profound perception of 3-D shape. We cannot, however, rule out the possibility that this interference could be accounted for by differences in low-level image features such as the luminance polarity of the stimuli used. To rule out this alternative account, we conducted a third (control) experiment. 
Experiment 3: Investigating luminance polarity
A separate set of participants was tested with the same stimuli as those used in Experiment 2, but this time the shading gradients were rotated 90° clockwise (see Figure 5). In the Shape-only task, we again predicted that width and length would interfere with each other. 
Figure 5
 
Examples of the stimuli used in Experiment 3, in which the stimuli used in Experiment 2 were rotated by 90° in a clockwise direction (thus, width was now the vertical dimension, and length was now the horizontal dimension). The stimuli could vary along two shape dimensions (width and length) and could vary between two shading patterns (horizontal shading gradients that were darker on the left or darker on the right). Neither shading pattern led to a profound impression of concavity or convexity.
In the Shape–Shading task, participants classified oval stimuli based on their width (or length) while ignoring their shading (horizontal shading gradients, which were darker on the left or darker on the right), and vice versa. Based on the assumption that there is a single light source in the scene that shines from above, the perception of shape derived from ovals with horizontal shading gradients can be quite ambiguous (Ramachandran, 1988a, 1988b). That is, an oval with a horizontal shading gradient that is darker on the left, for example, can be perceived as having either a convex or a concave shape, or the shading may not contribute to the perception of object shape whatsoever. Thus, when compared with vertical shading gradients, horizontal shading gradients produce a much less profound perception of 3-D shape, if they produce a perception of 3-D shape at all (to be convinced, the reader might want to compare the stimuli in Figures 3 and 5). As such, we predicted that width (or length) and shading would not interfere with each other in this experiment. Note that despite the difference in the orientation of the shading gradients used in Experiments 2 and 3, all of the stimuli possessed the same differences in luminance polarity. Thus, a lack of interference in the present experiment would suggest that the interference between object shape and shading observed in Experiment 2 could not be accounted for by differences in low-level image features such as luminance polarity, and instead likely reflects the fact that the pattern of shading on the surface of the ovals contributed to the perception of shape. 
Participants
Twelve participants (8 females, 4 males; 11 right-handed, 1 left-handed; mean age = 33.67 years, range = 21–52 years) participated in this experiment. The participants were selected using the same inclusion–exclusion criteria described for Experiments 1 and 2. 
Stimuli and apparatus
The stimuli used in this experiment were identical to those used in Experiment 2, but were rotated 90° clockwise (see Figure 5). With this rotation, the type of shading pattern on the surface of the ovals changed. One horizontal shading gradient was darker on the left (labeled as “left dark” to participants), and the other horizontal shading gradient was darker on the right (labeled as “right dark”). 
The apparatus used in this experiment was identical to that used in Experiments 1 and 2. 
Procedure
The procedure used in this experiment was identical to the one used in Experiment 2, with the exception that in the Shape–Shading task, participants classified horizontal shading gradients (i.e., “left dark” vs. “right dark”) rather than vertical shading gradients. No participants reported a stable perception of shape from shading when shown a display containing examples of the stimuli that they were going to encounter during the testing session. 
Results
The analysis used in this experiment was identical to the one used in Experiments 1 and 2. 
Shape-only task
The main effect of condition (length: M = 649 ms, SEM = 29; width: M = 574 ms, SEM = 23) for response latency was significant, F(1, 11) = 19.12, p < 0.001, MSE = 3551.42. In addition, the main effect of block type (baseline: M = 592 ms, SEM = 25; filtering: M = 631 ms, SEM = 26; F(1, 11) = 16.73, p < 0.002, MSE = 1121.41) and the interaction between condition and block type (F(1, 11) = 8.30, p < 0.02, MSE = 624.27) were also significant. To confirm our predictions, we further probed the interaction by conducting pairwise comparisons contrasting the response latencies in the baseline and filtering blocks of both conditions. In the length condition, response latencies in the baseline blocks (M = 619 ms, SEM = 26) were significantly faster than the response latencies in the filtering blocks (M = 679 ms, SEM = 35), t(11) = 4.02, p < 0.002 (see Figure 6A). Similarly, response latencies in the baseline blocks (M = 565 ms, SEM = 26) of the width condition were significantly faster than those in the filtering blocks (M = 583 ms, SEM = 21), t(11) = 2.31, p < 0.05. 
Figure 6
 
Results for each judgment in each task (width: width is the relevant attribute, length is the irrelevant attribute; length: length relevant and width irrelevant; shape: width or length relevant, shading irrelevant; shading: shading relevant, width or length irrelevant) from Experiment 3. Light bars represent baseline blocks, where only the relevant attribute varies, and dark bars represent filtering blocks, where both the relevant and irrelevant attributes vary. All statistical comparisons are between the absence versus the presence of variation on the irrelevant attribute for each task (i.e., between the light and the dark bars for each task). Results are based on data from 12 participants, in a repeated-measures design. Error bars indicate 95% confidence intervals derived using the mean square error term from the repeated-measures analyses of variance. (A) Results for participants' response latencies in the Shape-only task (width and length judgments) and the Shape–Shading task (width or length and shading judgments). * p < 0.05; ** p < 0.01. (B) Results for the number of errors committed in both tasks of Experiment 3 (ms = milliseconds).
In the analysis on the number of errors committed, the main effects of condition (length: M = 4.04, SEM = 0.77; width: M = 2.21, SEM = 0.47; F(1, 11) = 5.50, p < 0.04, MSE = 7.33) and block type (baseline: M = 2.38, SEM = 0.41; filtering: M = 3.88, SEM = 0.68; F(1, 11) = 8.03, p < 0.02, MSE = 3.36) were both significant. The condition-by-block type interaction, however, was not significant, F(1, 11) = 0.02, p > 0.90, MSE = 5.72. We conducted post-hoc pairwise comparisons using the same logic outlined above (see Figure 6B). This analysis failed to return any significant results. Participants' accuracy in the baseline and filtering blocks of the length condition did not differ significantly (baseline: M = 3.33, SEM = 0.84; filtering: M = 4.75, SEM = 0.92; t(11) = 1.62, p > 0.13). Similarly, the number of errors committed by participants in the baseline and filtering blocks of the width condition did not differ significantly (baseline: M = 1.42, SEM = 0.38; filtering: M = 3.00, SEM = 0.82; t(11) = 1.83, p > 0.09). 
Shape–shading task
The main effect of condition was significant, with participants responding more slowly overall in the shape condition (M = 539 ms, SEM = 18) than in the shading condition (M = 404 ms, SEM = 30), F(1, 11) = 52.83, p < 0.001, MSE = 4176.13. The main effect of block type, however, was not significant, as participants' response latencies were quite similar in the baseline (M = 468 ms, SEM = 24) and the filtering blocks (M = 475 ms, SEM = 22), F(1, 11) = 2.11, p > 0.17, MSE = 296.00. The interaction between condition and block type was also not significant, F(1, 11) = 0.13, p > 0.72, MSE = 757.91. Next, we conducted a simple main effects analysis to confirm the prediction that width (or length) and shading would not interfere with each other (see Figure 6A). This lack of interference is precisely what we observed: response latencies in the baseline and filtering blocks did not differ significantly in either the shape condition (baseline: M = 534 ms, SEM = 17; filtering: M = 544 ms, SEM = 21; t(11) = 1.02, p > 0.33) or the shading condition (baseline: M = 402 ms, SEM = 32; filtering: M = 406 ms, SEM = 28; t(11) = 0.50, p > 0.62). 
The main effect of condition was significant in the analysis on the number of errors committed, as participants committed more errors overall in the shape condition (M = 1.83, SEM = 0.32) than in the shading condition (M = 0.46, SEM = 0.17), F(1, 11) = 16.04, p < 0.002, MSE = 1.42. The main effect of block type, however, was not significant, as participants' error rates did not differ across the baseline blocks (M = 1.21, SEM = 0.26) and the filtering blocks (M = 1.08, SEM = 0.27), F(1, 11) = 0.11, p > 0.74, MSE = 1.73. The condition-by-block type interaction was also not significant, F(1, 11) = 0.22, p > 0.64, MSE = 2.34. Furthermore, when shape and shading were examined independently by post-hoc pairwise comparisons, no significant differences were observed in the number of errors committed in the baseline and filtering blocks for the shape condition (baseline: M = 2.00, SEM = 0.54; filtering: M = 1.67, SEM = 0.45; t(11) = 0.44, p > 0.66) or the shading condition (baseline: M = 0.42, SEM = 0.19; filtering: M = 0.50, SEM = 0.26; t(11) = 0.27, p > 0.79; see Figure 6B). 
Discussion
The results of Experiment 3 provide further evidence that width and length are integral dimensions of object shape. Because these dimensions likely share common processing resources, it is nearly impossible to ignore changes in one dimension while attending to the other, and vice versa. Thus, the perception of object shape is achieved holistically. 
In this experiment, we used horizontal shading gradients that, when varied, did not produce a reliable percept of shape from shading. This is in stark contrast to the vertical shading gradients used in Experiment 2, which, when randomly varied, led to a profound percept of shape from shading. However, importantly, the stimuli in both experiments possessed exactly the same differences in luminance polarity. A lack of interference between variations in width (or length) and variations in shading in the present experiment, therefore, suggests that the interference between these attributes observed earlier could not be accounted for by low-level image features such as luminance polarity, and likely arose because the pattern of shading on the surface of the stimuli contributed to the perception of the shape (concave or convex) of those stimuli. However, ruling out differences in luminance polarity as the cause of the interference observed in Experiment 2 does not necessarily rule out the possibility that other factors were operating. For example, perhaps variations in vertical shading gradients are more salient than variations in horizontal shading gradients, and this difference in saliency could account for why we obtained interference in Experiment 2 (which used vertical gradients) but not in Experiment 3 (which used horizontal gradients). To examine this possibility, we conducted a fourth (control) experiment. 
Experiment 4: Investigating gradient orientation
A separate set of participants completed the same tasks that participants in Experiment 2 completed, but instead of vertical shading gradients with a gradual change in luminance polarity, we used vertical shading gradients with a sharp discontinuity in luminance polarity (see Figure 7). In the Shape-only task, we again predicted that width and length would interfere with each other. In the Shape–Shading task, participants classified the width (or length) of the stimuli while ignoring changes in the direction of luminance polarity (dark on the bottom or dark on the top), and vice versa. Unlike in Experiment 2, which also used vertical shading gradients, we predicted that the gradients used in this experiment would not lead to interference between shape and surface properties, precisely because the pattern on the surface of the stimuli did not produce a percept of 3-D shape. 
Figure 7
 
Examples of the stimuli used in Experiment 4. The stimuli could vary along two shape dimensions (width and length) and could vary between two shading patterns (vertical shading gradients that were darker on the bottom or darker on the top). Neither shading pattern led to a perception of 3-D shape.
Participants
Thirteen individuals (7 females, 6 males; 7 right-handed, 4 left-handed; mean age = 27.38 years, range = 18–41 years) participated in this experiment. They were selected using the same inclusion–exclusion criteria described for all previous experiments. 
Stimuli and apparatus
The stimuli used in this experiment were identical in size and orientation to those used in Experiment 2, but instead of having a gradual change in luminance polarity, the stimuli had a sharp discontinuity in the change from dark to light regions (see Figure 7). Neither pattern led to a percept of 3-D shape, a claim that was verified individually in each participant prior to the beginning of the experimental session. 
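The contrast between the two stimulus classes can be illustrated with a short NumPy sketch that generates a gradual vertical gradient (as in Experiment 2) and a sharp-discontinuity pattern (as in this experiment). The image dimensions and luminance values are assumptions for illustration; the actual stimuli were rendered ovals, not full-field patterns.

```python
import numpy as np

def vertical_pattern(height=256, width=256, sharp=False, dark_on_bottom=True):
    """Build a grayscale image (values in [0, 1]) with a vertical luminance change.

    sharp=False gives a smooth linear gradient (as in Experiment 2, which
    produced a shape-from-shading percept); sharp=True gives an abrupt
    light/dark step (as in Experiment 4, which did not). Both versions have
    the same overall luminance polarity.
    """
    if sharp:
        column = np.repeat([1.0, 0.0], height // 2)  # light top half, dark bottom half
    else:
        column = np.linspace(1.0, 0.0, height)       # smooth light-to-dark ramp
    if not dark_on_bottom:
        column = column[::-1]                        # reverse the polarity
    return np.tile(column[:, None], (1, width))

gradual = vertical_pattern(sharp=False)   # Experiment 2-style stimulus
step = vertical_pattern(sharp=True)       # Experiment 4-style stimulus
```

Flipping `dark_on_bottom` implements the polarity reversal that participants had to classify (or ignore) in the Shape–Shading task.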
The apparatus used in this experiment was identical to that used in Experiments 1–3. 
Procedure
The procedure used in this experiment was identical to the one used in Experiment 2. 
Results
The analysis used in this experiment was identical to the analyses used in all previous experiments. 
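The pairwise comparisons reported below reduce, for each condition, to a paired t-test on per-participant mean latencies in the baseline versus filtering blocks. A minimal sketch of that computation follows, with simulated data standing in for the real response times (the numbers are illustrative, not the study's data).

```python
import numpy as np

def paired_t(baseline_rt, filtering_rt):
    """Paired t statistic for filtering-minus-baseline RT differences.

    A reliably positive t indicates Garner interference: responses slow down
    when the irrelevant attribute varies. Degrees of freedom are n - 1.
    """
    d = np.asarray(filtering_rt, dtype=float) - np.asarray(baseline_rt, dtype=float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Simulated per-participant mean RTs (ms) for 13 participants,
# with a built-in ~30 ms filtering cost.
rng = np.random.default_rng(1)
base = rng.normal(550, 40, size=13)
filt = base + rng.normal(30, 15, size=13)
t, df = paired_t(base, filt)
```

With 13 participants this yields df = 12, matching the t(12) values reported throughout the Results.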
Shape-only task
For response latency, the main effect of condition (length: M = 667 ms, SEM = 32; width: M = 567 ms, SEM = 33; F(1, 12) = 17.13, p < 0.001, MSE = 7673.25) and the main effect of block type (baseline: M = 598 ms, SEM = 27; filtering: M = 636 ms, SEM = 34; F(1, 12) = 8.98, p < 0.02, MSE = 2110.33) were both significant. The interaction between condition and block type, however, was not significant, F(1, 12) = 0.25, p > 0.62, MSE = 1940.42. Using the same logic outlined in previous analyses, we conducted pairwise comparisons contrasting the response latencies in the baseline and filtering blocks of both conditions. Response latencies in the baseline blocks (M = 551 ms, SEM = 31) of the width condition were significantly faster than those in the filtering blocks (M = 583 ms, SEM = 37), t(12) = 2.18, p < 0.05 (see Figure 8A). Similarly, response latencies in the baseline blocks (M = 645 ms, SEM = 32) of the length condition were significantly faster than those in the filtering blocks (M = 690 ms, SEM = 35), t(12) = 2.19, p < 0.05. 
Figure 8
 
Results for each judgment in each task (width: width is the relevant attribute, length is the irrelevant attribute; length: length relevant and width irrelevant; shape: width or length relevant, shading irrelevant; shading: shading relevant, width or length irrelevant) from Experiment 4. Light bars represent baseline blocks, where only the relevant attribute varies, and dark bars represent filtering blocks, where both the relevant and irrelevant attributes vary. All statistical comparisons are between the absence versus the presence of variation on the irrelevant attribute for each task (i.e., between the light and the dark bars for each task). Results are based on data from 13 participants, in a repeated-measures design. Error bars indicate 95% confidence intervals derived using the mean square error term from the repeated-measures analyses of variance. (A) Results for participants' response latencies in the Shape-only task (width and length judgments) and the Shape–Shading task (width or length and shading judgments). * p < 0.05. (B) Results for the number of errors committed in both tasks of Experiment 4 (ms = milliseconds).
The main effect of condition (width: M = 2.50, SEM = 0.68; length: M = 6.35, SEM = 0.88; F(1, 12) = 15.31, p < 0.002, MSE = 12.56) was the only effect to reach significance in the analysis of the number of errors committed, as both the main effect of block type (baseline: M = 3.85, SEM = 0.63; filtering: M = 5.00, SEM = 0.76; F(1, 12) = 3.26, p > 0.095, MSE = 5.31) and the condition-by-block-type interaction (F(1, 12) = 0.01, p > 0.99, MSE = 8.83) were not significant. Planned pairwise comparisons contrasting the baseline and filtering blocks did not return any significant results (see Figure 8B). Specifically, the difference between the number of errors committed in the baseline (M = 1.92, SEM = 0.79) and filtering blocks (M = 3.08, SEM = 0.70) of the width condition approached, but did not reach, significance, t(12) = 1.93, p = 0.077. In the length condition, this difference was also not significant (baseline: M = 5.77, SEM = 1.06; filtering: M = 6.92, SEM = 1.16; t(12) = 0.86, p > 0.40). 
Shape–shading task
In the analysis of participants' response latencies, the main effect of condition was significant: participants responded faster overall in the shading condition (M = 412 ms, SEM = 14) than in the shape condition (M = 556 ms, SEM = 28), F(1, 12) = 51.89, p < 0.001, MSE = 5139.94. The main effect of block type was not significant, as response latencies in the baseline (M = 482 ms, SEM = 20) and filtering blocks (M = 486 ms, SEM = 21) were nearly identical, F(1, 12) = 0.64, p > 0.43, MSE = 482.38. The interaction between condition and block type was also not significant, F(1, 12) = 0.10, p > 0.75, MSE = 329.90. As in Experiments 2 and 3, we conducted a simple-main-effects analysis to investigate whether shape (width or length) and shading judgments interfered with each other (see Figure 8A). In line with our initial prediction, no interference was observed between these two stimulus attributes: response latencies in the baseline (M = 552 ms, SEM = 27) and filtering (M = 559 ms, SEM = 31) blocks of the shape condition did not differ significantly, t(12) = 0.67, p > 0.51. The same pattern was observed in the shading condition (baseline: M = 411 ms, SEM = 16; filtering: M = 414 ms, SEM = 13; t(12) = 0.59, p > 0.56). 
The analysis of the number of errors yielded a significant main effect of condition (shape: M = 3.85, SEM = 0.78; shading: M = 1.19, SEM = 0.42; F(1, 12) = 14.33, p < 0.003, MSE = 6.39), but neither the main effect of block type (baseline: M = 2.62, SEM = 0.46; filtering: M = 2.42, SEM = 0.63; F(1, 12) = 0.23, p > 0.63, MSE = 2.06) nor the condition-by-block-type interaction (F(1, 12) = 1.56, p > 0.23, MSE = 1.49) reached significance. Furthermore, no significant differences were observed in the number of errors committed in the baseline and filtering blocks for either the shape condition (baseline: M = 4.15, SEM = 0.80; filtering: M = 3.54, SEM = 0.88; t(12) = 0.97, p > 0.35) or the shading condition (baseline: M = 1.08, SEM = 0.27; filtering: M = 1.31, SEM = 0.59; t(12) = 0.61, p > 0.55; see Figure 8B). 
Discussion
For the fourth time in this study, we demonstrated that changing the width of an object interfered with judgments about that object's length, and vice versa. These results overwhelmingly suggest that the shape of an object is perceived holistically, and that the width and the length of an object are integral dimensions that likely share common processing resources—one cannot help but notice changes in one dimension when explicitly attending to the other. 
In Experiment 2, interference between shape and shading was observed using stimuli that had vertical shading gradients, and in Experiment 3, no interference was observed between these attributes when we used horizontal shading gradients that possessed the same difference in luminance polarity as the stimuli used in Experiment 2. Based on this result, we reasoned that the interference in Experiment 2 was not caused by low-level image features such as luminance polarity. The stimuli did differ in the orientations of the shading gradients used, however, and thus we could not rule this out as a factor that contributed to the pattern of the results observed (i.e., perhaps changes in vertical shading gradients are more salient than changes in horizontal shading gradients). Thus, in Experiment 4, we used vertical black and white patterns that did not create a percept of 3-D shape and reasoned that in this situation, varying the direction of this pattern on the surface of the stimuli would not interfere with judgments of width or length (and vice versa). This is precisely what we found, which suggests that the interference between shape and shading observed in Experiment 2 did not arise because reversing the polarity of vertical shading gradients is particularly distracting (i.e., more distracting than reversing the polarity of horizontal shading gradients). We are confident therefore that the interference between the shape of an object and its shading in Experiment 2 arose because the pattern of shading on the surface of the stimuli contributed to the perception of the shape (concave or convex) of those stimuli. As we mentioned previously, this suggests that the shape of an object and the pattern of shading on its surface must therefore share common processing resources at some stage of visual processing. 
General discussion
The results from these experiments clearly demonstrate that two aspects of an object's shape, namely its width and length, share common processing resources. In all four experiments, participants could not ignore changes in one dimension (e.g., length) while attending to the other (e.g., width), and vice versa. These findings are entirely consistent with our previous behavioral study (Cant et al., 2008) and with numerous other studies that have used the Garner task to show that width and length are integral dimensions of object shape (Dykes & Cooper, 1978; Felfoldy, 1974; Ganel & Goodale, 2003; Macmillan & Ornstein, 1998). Taken together, the evidence is overwhelming that object shape is perceived holistically, in that the visual system cannot process one dimension, such as width, independently of another dimension, such as length. 
In an earlier study (Cant et al., 2008), we showed that variations in the surface properties of an object had no effect on judgments of length and width, suggesting that surface cues and shape are processed independently in the visual system, a conclusion that is consistent with evidence from both neuroimaging and neuropsychological studies (Cant et al., 2009; Cant & Goodale, 2007; Goodale & Milner, 2004; Humphrey et al., 1994; James et al., 2003; Milner et al., 1991). However, in Cant et al.'s study, the changes in the surface cues did not contribute to changes in the perceived shape of the object. In other words, the surface properties of the objects that we used (wood and brick textures rendered in beige and yellow) were intimately related to the material those objects were made of, but not their shape. It was perhaps not surprising, therefore, that shape and surface properties did not interfere with each other. However, as we discussed earlier in the Introduction, if changes in surface cues resulted in changes in the perception of an object's 3-D shape, then we might expect that such changes would interfere with judgments of the object's width or length. In fact, this is exactly what we found in Experiments 1 and 2 of the present study, in which changes in texture (Experiment 1) and changes in the polarity of shading (Experiment 2) interfered with judgments of the length and width of those same stimuli. However, this interference was not symmetrical; in other words, varying width and length did not interfere with judgments about surface cues, suggesting that it is possible to attend to the surface properties of an object without attending to changes in the dimensions of that object, even when the surface cues affect the perception of the object's shape (but see below). 
Finally, the results of Experiments 3 and 4 show that the effects of variations in shading on judgments of length or width that we observed in Experiment 2 were not due to changes in low-level image features such as luminance polarity or gradient orientation, but instead to powerful shape-from-shading cues. 
It is interesting to note that the interference between shape and shading in Experiment 2 appeared stronger than the interference between shape and texture in Experiment 1. The precise reason for this difference is not immediately clear, but perhaps it relates to differences in the magnitudes of the shape-from-surface effects in the two experiments. In Experiment 1, the impression of 3-D shape randomly varied between a convex and a flat surface, whereas in Experiment 2 it randomly varied between a convex and a concave surface. This difference in the magnitude of the variation in depth could have produced a stronger sense of 3-D shape in Experiment 2 than in Experiment 1, which in turn may have produced stronger interference. Perhaps the strength of the interference across the two experiments would have been more comparable if, like the shading gradients used in Experiment 2, the texture gradients had created percepts of both convex and concave surfaces. At any rate, the results from Experiments 1 and 2 demonstrate that variations in surface cues that contribute to 3-D shape perception interfere with judgments about shape that are based on the width or the length of an object. 
The observation that surface properties can contribute to the perception of object shape is consistent with numerous psychophysical and neuroimaging studies on the perception of shape from shading and the perception of shape from texture (Georgieva et al., 2008; Humphrey et al., 1997, 1996; Kleffner & Ramachandran, 1992; Li & Zaidi, 2000, 2001; Ramachandran, 1988a, 1988b; Tsutsui et al., 2002). However, we believe that the findings of the present study, coupled with those from our earlier fMRI and behavioral studies, go well beyond this and provide important insights into the nature of shape and surface-property processing in the visual system. Taken together, it seems that when surface cues are being used to determine the material properties of objects, the processing of those cues is quite independent of the processing of object shape. When surface cues are being used to determine the shape of the object, however, the processing of these cues is not completely independent of the processing of the contours of the object. Presumably this asymmetry in interference is related to differences in attentional deployment. At the same time, the asymmetry suggests that the functional organization of the processing of these different object properties in the ventral visual stream may be quite dynamic and flexible. 
We have shown previously that the processing of object shape is localized in lateral regions of the ventral stream, most consistently in area LO (Cant et al., 2009; Cant & Goodale, 2007). In contrast, we showed that the processing of surface properties is localized in more medial regions of the ventral stream, most consistently in the CoS. In both of these studies, the surface properties of the objects used were closely associated with the material from which they were made (e.g., marble and tin foil). Since the way in which we come to appreciate the material properties of an object (e.g., mass, compliance, fragility) is via an initial direct handling of that object (picking an object up to appreciate its mass, deforming it with our fingers to appreciate its compliance and fragility), it is interesting to note that the processing of material properties (via surface cues) in our neuroimaging studies is in the vicinity of areas in medial temporal lobe that have been implicated in memory. From our initial direct handling of an object, we form associations related to the material properties of that object, such as “steel is hard”, or “marshmallow is compliant”. Moreover, we make associations between the visual appearance of the surface of that object and its material properties, such as “steel is shiny”, or “marshmallow is white”. Presumably, these associations are stored in memory, and, in subsequent encounters, direct handling of the object is not necessary to recognize that object's material properties. For example, when we see a steel rod, we immediately know that it is rigid and relatively heavy, even before we pick it up. 
In summary, the perception of material properties via surface-based cues (such as color and texture) is achieved through the association of information across multiple sensory modalities (e.g., somatosensation and vision), and this processing occurs in high-order regions of the ventral stream, in the vicinity of areas of cortex dedicated to processing episodic memory. Interestingly, a recent neuroimaging experiment in our laboratory showed that the processing of material properties through sight and sound activates regions adjacent to one another in parahippocampal cortex (Arnott, Cant, Dutton, & Goodale, 2008). Moreover, a series of studies has suggested that instead of being dedicated to processing either episodic memory or place-related information, the parahippocampal cortex should be seen as a region that processes contextual associations in general (Aminoff, Gronau, & Bar, 2007; Bar & Aminoff, 2003; Bar, Aminoff, & Ishai, 2008). The findings from our neuroimaging studies on the perception of material properties are certainly consistent with this idea. Furthermore, these findings also show that there is a clear separation in the processing pathways for shape and surface properties (that are linked to material properties) in occipito-temporal cortex, a finding that is quite consistent with our previous behavioral findings of no interference between form and surface (material) properties (Cant et al., 2008). 
But what about the interaction between the processing of shape and the processing of surface cues? If, as the data suggest, there is shared processing for shape and surface cues, then what are the anatomical substrates of that shared processing? One could begin by asking where shape from shading and shape from texture are processed. Neuroimaging and neurophysiological studies in humans and monkeys suggest that shape from shading is a relatively low-level phenomenon that may be processed as early as V1 or V2 (Humphrey et al., 1997; Lee, Yang, Romero, & Mumford, 2002; Mamassian, Jentzsch, Bacon, & Schweinberger, 2003; Smith, Kelly, & Lee, 2007). Indeed, response latencies in the shading condition of Experiments 2–4 were significantly faster than the latencies in the shape condition, which suggests that the processing of shading occurs relatively early in visual analysis. Moreover, computational models have suggested that shape from shading is processed in early visual cortical areas (e.g., Lehky & Sejnowski, 1988), and numerous psychophysical studies have demonstrated that shape-from-shading stimuli elicit perceptual pop-out effects (Braun, 1993; Kleffner & Ramachandran, 1992; Ramachandran, 1988a, 1988b). It has been argued that perceptual pop-out is a hallmark of pre-attentive processing, and the fact that this type of processing has been localized to early areas of visual cortex (e.g., Kimura, Katayama, & Murohashi, 2005) is certainly consistent with the idea that the perception of shape from shading is a relatively low-level phenomenon. In fact, the visual-form agnosia patient DF, who has selective bilateral lesions to area LO but bilateral sparing of primary visual cortex, could perform region-segregation tasks on the basis of shape from shading (Humphrey et al., 1996). The initial extraction of shading gradients thus appears to happen early in visual processing, just like the extraction of edges and color; presumably, all this information must be integrated later in the visual pathways. 
Compared to the neural correlates of shape from shading, far fewer studies have investigated the neural correlates of shape from texture in the ventral stream. Although, as we have already discussed, there is evidence that texture itself is processed in medial regions of the ventral stream, there is little information about where shape from texture is extracted. A recent neuroimaging study, however, has shown that regions in the lateral occipital complex in the vicinity of area LO were particularly sensitive to processing both shape from texture and shape from shading (Georgieva et al., 2008). It seems that this region of cortex, which encompasses area LO, may integrate a number of different visual cues (e.g., texture, shading, contour) to construct a representation of 3-D object shape. Taken together, these observations suggest that the locus of interference between shape and surface properties in the present study is likely to be area LO. In other words, although texture, shading, and edges may be extracted from the visual array in early cortical areas, these cues are then integrated into some sort of three-dimensional geometric representation of object shape in area LO. 
These anatomical speculations resonate with the asymmetrical Garner interference that we observed in our study. When shading is being combined with width (or length) to construct an overall three-dimensional percept of an object, area LO is recruited. When shading information is being assessed without reference to shape, however, the processing can take place in early visual areas without recruiting area LO (at least not as much). Thus, one would expect to see an asymmetry in Garner interference with these differences in task requirements (and presumably attentional deployment). One must be cautious here, however: the inferences we are making about anatomical substrates are based on neuroimaging studies that did not use Garner tasks. In fact, using the Garner paradigm in conjunction with neuroimaging could be a useful way of studying how different brain regions are recruited in the processing of object attributes and how the pattern of recruitment changes with task demands. 
We must also be cautious not to over-interpret the asymmetric pattern of interference between shape and surface properties observed in Experiments 1 and 2. We do not mean to imply that changes in the shape of an object never contribute to the perception of that object's surface. Quite the contrary, Ramachandran (1988b), for example, demonstrated that the impression of the surface of two identical luminance gradients can be dramatically altered simply by changing the shape of the contour at the top and the bottom of each gradient. Moreover, Knill and Kersten (1991) demonstrated that the perception of the lightness of a surface is affected by variations in the contour of that surface (i.e., varying the contour of stimuli between jagged and curved caused participants to perceive surfaces as having either a step change in brightness or a uniform level of brightness, respectively). Nevertheless, this relationship was not evident in the stimuli that we used in our study; participants could effectively ignore changes in length (or width) while classifying surface texture (or shading). Why would this be the case, in light of convincing demonstrations that changes in contour can contribute to surface perception? Perhaps it was not the stimuli per se, but rather the nature of the requirements of the surface-properties task we used that failed to produce symmetric interference between shape and surface cues. For example, the asymmetric pattern of interference may have arisen because we required participants to classify the curvature of the textured lines (“pattern A” vs. “pattern B”) and the direction of the shading gradient (“bottom dark” vs. “top dark”), instead of classifying the impression of 3-D shape that variations in these attributes produced (i.e., convex vs. flat in Experiment 1, and convex vs. concave in Experiment 2). 
In the version of the task that we used, participants could have selectively attended to a particular region of the surface to make their texture or shading judgments (e.g., focus on the bottom or the top of each oval, which would be a sufficient strategy to classify surface pattern in both Experiments 1 and 2), and this could have produced a weaker impression of 3-D shape than that experienced when judging width or length (which required attention to the entire horizontal or vertical extent of the object). Perhaps if we had forced the participants to classify the 3-D shape that they inferred from variations in a surface property, we might have observed symmetric interference between shape defined by contour and shape defined by surface cues. That is, changing the width (or length) of an object might interfere with explicit judgments about the sign of that object's surface curvature (concave or convex), and vice versa. Indeed, we would predict that this would be the case. Nevertheless, having participants concentrate on differences in the texture patterns (or the shading) rather than on differences in the 3-D shape inferred from those patterns is similar to what might occur when people are making judgements about the material properties of an object rather than its shape. 
The above discussion about interactions between contour and surface properties deserves further qualification. It may indeed be the case that varying the outline contour of an object (e.g., varying width or length) affects the perception of 3-D shape (e.g., surface curvature), and vice versa. This relationship, however, is likely to be complex and subject to numerous contextual factors. For instance, we demonstrated that varying contour does not affect surface-property perception, but, as we noted above, this result might have been related to task demands and attentional deployment. Moreover, a given 2-D outline (e.g., a circle) can give rise to a number of different 3-D shapes (e.g., a sphere with a circular base vs. a cone with a circular base), and in this situation it is not clear how the perception of the contour constrains the perception of the surface. Our guess is that the perception of 3-D shape requires the integration of information from multiple axes of the object. For example, when analyzing an object, the visual system integrates information about width (wide vs. narrow), length (long vs. short), and depth (concave vs. convex) to create an overall 3-D impression. During this analysis, variations in one dimension may influence the impression of 3-D shape, perhaps by introducing structured noise into the system, which effectively lengthens the time required to perform the analysis. Implicit in this account is the notion that the same brain region (or network) performs these computations (likely area LO), such that variations in one dimension will interfere with judgments of another. Whether this effect reflects altered perception or simply increased processing time is an open question. On the one hand, a sense of surface curvature may not be essential for judging the width or length of a particular outline. 
Instead, it simply might take longer to make this judgment because the same region is processing these dimensions (all of which contribute to 3-D shape perception). In support of this idea, the error rates in all four experiments were quite low, suggesting that participants' perception of the different attributes may not have been reliably altered, but the time needed to form those percepts was. On the other hand, a fuller explanation of these effects may require an interactionist model, in which both perception and processing times are affected when different aspects of object shape are varied within the same local brain region. These ideas are speculative, of course, and in future work we plan to conduct Garner experiments using different shape tasks to shed more light on the complex relationships between contour, surface, and 3-D shape perception. 
In conclusion, we have provided evidence that variations in surface cues that produce different impressions of 3-D shape interfere with judgments of width and length, suggesting that at some level of analysis these attributes of shape share common processing resources. These findings differ from those of our previous study, where variations in surface cues that produced different impressions of material properties did not interfere with shape judgments (Cant et al., 2008). Taken together, these findings suggest that surface cues that contribute to object shape are processed quite separately from surface cues that are linked to an object's material properties. This conclusion is not meant to imply that surface cues are in some cases linked only to object shape and in others only to material properties. Rather, these findings, along with those from our previous neuroimaging studies (Cant et al., 2009; Cant & Goodale, 2007), suggest that different patterns of visual processing and brain activation will result from selective attention to different aspects of the same visual surface. In the stimuli used in Experiments 1 and 2 of the present study, the connection between surface and shape perception was much more salient than any connection between surface and material perception (a relationship we constructed deliberately), and the opposite was true in our previous behavioral study. Real-world surfaces, however, provide information about both geometric structure and material properties, and thus it would be informative to use stimuli that capture these real-world relationships to investigate the behavioral and neural interactions between shape, surface, and material perception in greater detail. We are currently designing an fMRI experiment to explore this issue. 
Acknowledgments
We would like to thank Patrick Cavanagh and two anonymous reviewers for helpful comments on an earlier version of this manuscript. This research was supported by the Canadian Institutes of Health Research (MAG) and the Canada Research Chairs Program (MAG). 
Commercial relationships: none. 
Corresponding author: Melvyn A. Goodale. 
Email: mgoodale@uwo.ca. 
Address: Department of Psychology, The University of Western Ontario, London, Ontario N6A 5C2, Canada. 
References
Aminoff, E., Gronau, N., & Bar, M. (2007). The parahippocampal cortex mediates spatial and nonspatial associations. Cerebral Cortex, 17, 1493–1503.
Arnott, S. R., Cant, J. S., Dutton, G. N., & Goodale, M. A. (2008). Crinkling and crumpling: An auditory fMRI study of material properties. Neuroimage, 43, 368–378.
Bar, M., & Aminoff, E. (2003). Cortical analysis of visual context. Neuron, 38, 347–358.
Bar, M., Aminoff, E., & Ishai, A. (2008). Famous faces activate contextual associations in the parahippocampal cortex. Cerebral Cortex, 18, 1233–1238.
Braun, J. (1993). Shape-from-shading is independent of visual attention and may be a ‘texton’. Spatial Vision, 7, 311–322.
Cant, J. S., Arnott, S. R., & Goodale, M. A. (2009). fMR-adaptation reveals separate processing regions for the perception of form and texture in the human ventral stream. Experimental Brain Research, 192, 391–405.
Cant, J. S., & Goodale, M. A. (2007). Attention to form or surface properties modulates different regions of human occipitotemporal cortex. Cerebral Cortex, 17, 713–731.
Cant, J. S., Large, M. E., McCall, L., & Goodale, M. A. (2008). Independent processing of form, colour, and texture in object perception. Perception, 37, 57–78.
Dick, M., & Hochstein, S. (1988). Interactions in the discrimination and absolute judgement of orientation and length. Perception, 17, 177–189.
Duvelleroy-Hommet, C., Gillet, P., Cottier, J. P., de Toffol, B., Saudeau, D., & Corcia, P. (1997). Cerebral achromatopsia without prosopagnosia, alexia, object agnosia. Revue Neurologique (Paris), 153, 554–560.
Dykes, J. R., & Cooper, R. G. (1978). An investigation of the perceptual basis of redundancy gain and orthogonal interference for integral dimensions. Perception & Psychophysics, 23, 36–42.
Felfoldy, G. L. (1974). Repetition effects in choice reaction time to multidimensional stimuli. Perception & Psychophysics, 15, 453–459.
Ganel, T., & Goodale, M. A. (2003). Visual control of action but not perception requires analytical processing of object shape. Nature, 426, 664–667.
Garner, W. R. (1974). The processing of information and structure. Potomac, MD: L. Erlbaum Associates; distributed by Halsted Press, New York.
Georgieva, S. S., Todd, J. T., Peeters, R., & Orban, G. A. (2008). The extraction of 3D shape from texture and shading in the human brain. Cerebral Cortex, 18, 2416–2438.
Goodale, M. A., & Milner, A. D. (2004). Sight unseen: An exploration of conscious and unconscious vision. Oxford, UK: Oxford University Press.
Heywood, C. A., Gaffan, D., & Cowey, A. (1995). Cerebral achromatopsia in monkeys. European Journal of Neuroscience, 7, 1064–1073.
Heywood, C. A., & Kentridge, R. W. (2003). Achromatopsia, color vision, and cortex. Neurologic Clinics, 21, 483–500.
Heywood, C. A., Kentridge, R. W., & Cowey, A. (1998). Form and motion from colour in cerebral achromatopsia. Experimental Brain Research, 123, 145–153.
Humphrey, G. K., Goodale, M. A., Bowen, C. V., Gati, J. S., Vilis, T., & Rutt, B. K. (1997). Differences in perceived shape from shading correlate with activity in early visual areas. Current Biology, 7, 144–147.
Humphrey, G. K., Goodale, M. A., Jakobson, L. S., & Servos, P. (1994). The role of surface information in object recognition: Studies of a visual form agnosic and normal subjects. Perception, 23, 1457–1481.
Humphrey, G. K., Symons, L. A., Herbert, A. M., & Goodale, M. A. (1996). A neurological dissociation between shape from shading and shape from edges. Behavioural Brain Research, 76, 117–125.
James, T. W., Culham, J., Humphrey, G. K., Milner, A. D., & Goodale, M. A. (2003). Ventral occipital lesions impair object recognition but not object-directed grasping: An fMRI study. Brain, 126, 2463–2475.
Kimura, M., Katayama, J., & Murohashi, H. (2005). Neural correlates of pre-attentive and attentive processing of visual changes. Neuroreport, 16, 2061–2064.
Kleffner, D. A., & Ramachandran, V. S. (1992). On the perception of shape from shading. Perception & Psychophysics, 52, 18–36.
Knill, D. C., & Kersten, D. (1991). Apparent surface curvature affects lightness perception. Nature, 351, 228–230.
Lee, T. S., Yang, C. F., Romero, R. D., & Mumford, D. (2002). Neural activity in early visual cortex reflects behavioral experience and higher-order perceptual saliency. Nature Neuroscience, 5, 589–597.
Lehky, S. R., & Sejnowski, T. J. (1988). Network model of shape-from-shading: Neural function arises from both receptive and projective fields. Nature, 333, 452–454.
Li, A., & Zaidi, Q. (2000). Perception of three-dimensional shape-from-texture is based on patterns of oriented energy. Vision Research, 40, 217–242.
Li, A., & Zaidi, Q. (2001). Information limitations in perception of shape from texture. Vision Research, 41, 1519–1534.
Li, A., & Zaidi, Q. (2003). Observer strategies in perception of 3-D shape from isotropic textures: Developable surfaces. Vision Research, 43, 2741–2758.
Lipshits, M., McIntyre, J., Zaoui, M., Gurfinkel, V., & Berthoz, A. (2001). Does gravity play an essential role in the asymmetrical visual perception of vertical and horizontal line length? Acta Astronautica, 49, 123–130.
Macmillan, N. A., & Ornstein, A. S. (1998). The mean-integral representation of rectangles. Perception & Psychophysics, 60, 250–262.
Mamassian, P., Jentzsch, I., Bacon, B. A., & Schweinberger, S. R. (2003). Neural correlates of shape from shading. Neuroreport, 14, 971–975.
Milner, A. D., Perrett, D. I., Johnston, R. S., Benson, P. J., Jordan, T. R., & Heeley, D. W. (1991). Perception and action in ‘visual form agnosia’. Brain, 114, 405–428.
Prinzmetal, W., & Gettleman, L. (1993). Vertical–horizontal illusion: One eye is better than two. Perception & Psychophysics, 53, 81–88.
Ramachandran, V. S. (1988a). Perceiving shape from shading. Scientific American, 259, 76–83.
Ramachandran, V. S. (1988b). Perception of shape from shading. Nature, 331, 163–166.
Shechter, S., & Hochstein, S. (1992). Asymmetric interactions in the processing of the visual dimensions of position, width, and contrast of bar stimuli. Perception, 21, 297–312.
Shevell, S. K., & Kingdom, F. A. (2008). Color in complex scenes. Annual Review of Psychology, 59, 143–166.
Smith, M. A., Kelly, R. C., & Lee, T. S. (2007). Dynamics of response to perceptual pop-out stimuli in macaque V1. Journal of Neurophysiology, 98, 3436–3449.
Todd, J. T., Norman, J. F., Koenderink, J. J., & Kappers, A. M. (1997). Effects of texture, illumination, and surface reflectance on stereoscopic shape perception. Perception, 26, 807–822.
Tsutsui, K., Sakata, H., Naganuma, T., & Taira, M. (2002). Neural correlates for perception of 3D surface orientation from texture gradient. Science, 298, 409–412.
Zaidi, Q., & Li, A. (2002). Limitations on shape information provided by texture cues. Vision Research, 42, 815–835.
Figure 1
 
Examples of the stimuli used in Experiment 1. The stimuli could vary along two shape dimensions (width and length) and could vary between two texture patterns (curved lines that gave a convex appearance or straight lines that gave a flat appearance).
Figure 2
 
Results for each judgment in each task (width: width is the relevant attribute, length is the irrelevant attribute; length: length relevant and width irrelevant; shape: width or length relevant, texture irrelevant; texture: texture relevant, width or length irrelevant) from Experiment 1. Light bars represent baseline blocks, where only the relevant attribute varies, and dark bars represent filtering blocks, where both the relevant and irrelevant attributes vary. All statistical comparisons are between the absence versus the presence of variation on the irrelevant attribute for each task (i.e., between the light and the dark bars for each task). Results are based on data from 13 participants, in a repeated-measures design. Error bars indicate 95% confidence intervals derived using the mean square error term from the repeated-measures analyses of variance. (A) Results for participants' response latencies in the Shape-only task (width and length judgments) and the Shape–Texture task (width or length and texture judgments). * p < 0.05; ** p < 0.01. (B) Results for the number of errors committed in both tasks of Experiment 1 (ms = milliseconds).
Figure 3
 
Examples of the stimuli used in Experiment 2. The stimuli could vary along two shape dimensions (width and length) and could vary between two shading patterns (vertical shading gradients that were darker on the bottom and appeared convex, or darker on the top and appeared concave).
Figure 4
 
Results for each judgment in each task (width: width is the relevant attribute, length is the irrelevant attribute; length: length relevant and width irrelevant; shape: width or length relevant, shading irrelevant; shading: shading relevant, width or length irrelevant) from Experiment 2. Light bars represent baseline blocks, where only the relevant attribute varies, and dark bars represent filtering blocks, where both the relevant and irrelevant attributes vary. All statistical comparisons are between the absence versus the presence of variation on the irrelevant attribute for each task (i.e., between the light and the dark bars for each task). Results are based on data from 12 participants, in a repeated-measures design. Error bars indicate 95% confidence intervals derived using the mean square error term from the repeated-measures analyses of variance. (A) Results for participants' response latencies in the Shape-only task (width and length judgments) and the Shape–Shading task (width or length and shading judgments). ** p < 0.01; ^ p = 0.058. (B) Results for the number of errors committed in both tasks of Experiment 2. * p < 0.05; ms = milliseconds.
Figure 5
 
Examples of the stimuli used in Experiment 3, in which the stimuli used in Experiment 2 were rotated by 90° in a clockwise direction (thus, width was now the vertical dimension, and length was now the horizontal dimension). The stimuli could vary along two shape dimensions (width and length) and could vary between two shading patterns (horizontal shading gradients that were darker on the left or darker on the right). Neither shading pattern led to a profound impression of concavity or convexity.
Figure 6
 
Results for each judgment in each task (width: width is the relevant attribute, length is the irrelevant attribute; length: length relevant and width irrelevant; shape: width or length relevant, shading irrelevant; shading: shading relevant, width or length irrelevant) from Experiment 3. Light bars represent baseline blocks, where only the relevant attribute varies, and dark bars represent filtering blocks, where both the relevant and irrelevant attributes vary. All statistical comparisons are between the absence versus the presence of variation on the irrelevant attribute for each task (i.e., between the light and the dark bars for each task). Results are based on data from 12 participants, in a repeated-measures design. Error bars indicate 95% confidence intervals derived using the mean square error term from the repeated-measures analyses of variance. (A) Results for participants' response latencies in the Shape-only task (width and length judgments) and the Shape–Shading task (width or length and shading judgments). * p < 0.05; ** p < 0.01. (B) Results for the number of errors committed in both tasks of Experiment 3 (ms = milliseconds).
Figure 7
 
Examples of the stimuli used in Experiment 4. The stimuli could vary along two shape dimensions (width and length) and could vary between two shading patterns (vertical shading gradients that were darker on the bottom or darker on the top). Neither shading pattern led to a perception of 3-D shape.
Figure 8
 
Results for each judgment in each task (width: width is the relevant attribute, length is the irrelevant attribute; length: length relevant and width irrelevant; shape: width or length relevant, shading irrelevant; shading: shading relevant, width or length irrelevant) from Experiment 4. Light bars represent baseline blocks, where only the relevant attribute varies, and dark bars represent filtering blocks, where both the relevant and irrelevant attributes vary. All statistical comparisons are between the absence versus the presence of variation on the irrelevant attribute for each task (i.e., between the light and the dark bars for each task). Results are based on data from 13 participants, in a repeated-measures design. Error bars indicate 95% confidence intervals derived using the mean square error term from the repeated-measures analyses of variance. (A) Results for participants' response latencies in the Shape-only task (width and length judgments) and the Shape–Shading task (width or length and shading judgments). * p < 0.05. (B) Results for the number of errors committed in both tasks of Experiment 4 (ms = milliseconds).