Research Article | November 2008

The representation of subordinate shape similarity in human occipitotemporal cortex

Sven Panis, Joris Vangeneugden, Hans P. Op de Beeck, Johan Wagemans

Journal of Vision, November 2008, Vol. 8(10):9. https://doi.org/10.1167/8.10.9
Abstract

We investigated the coding of subordinate shape similarity in human object-selective cortex in two event-related functional magnetic resonance adaptation (fMR-A) experiments. Previous studies using faces have concluded that there is a narrow tuning of neuronal populations selective to each face, and that tuning is relative to the expected "average" face (norm-based encoding). Here we investigated these issues using outlines of animals and tools occupying particular positions on different morphing sequences per category. In a first experiment, we inferred the width of neural tuning to exemplars by examining whether the release from adaptation asymptotes with increasing shape changes between two stimuli. In a second experiment, we compared the response to central and extreme positions in shape space while controlling for the number of presentations of each unique stimulus, to study whether the expected "average" category exemplar plays a role. The current fMR-A results show that a small change in exemplar shape produces a large release from adaptation, but only for outline shape changes of animals and not for man-made tools. Furthermore, our results suggest that central and extreme positions were not treated differently. Together, these results suggest a narrow tuning in object-selective cortex for individual exemplars from natural object categories, consistent with an exemplar-based encoding principle.

Introduction
The neuronal basis underlying visual object recognition and shape representation is currently the focus of much neuroscientific research (Connor, Brincat, & Pasupathy, 2007; DiCarlo & Cox, 2007). Single-cell studies have reported that macaque inferotemporal (IT) neurons respond selectively to a limited set of moderately complex two- or three-dimensional object features in a particular configural relation, such as combinations of multiple, more or less curved contour fragments (Brincat & Connor, 2004, 2006; see also Pasupathy & Connor, 2001, 2002) or combinations of shape and texture or color (Tanaka, 1993, 1996, 2000, 2003; Wang, Fujita, & Murayama, 2000, 2003). 
Computational modeling shows how many of the observed IT response properties can emerge from feature-based feedforward processing at the highest level of a hierarchically organized system in which specificity and invariance are gradually built up to produce more or less distributed object representations (O'Toole, Jiang, Abdi, & Haxby, 2005; Riesenhuber & Poggio, 1999; Rolls & Deco, 2002). Recent theories suggest that an object can be represented by the pattern of activity across a number of view-tuned units whose response falls off monotonically with decreasing similarity to their preferred shapes (Edelman, 1998, 1999; Palmeri & Gauthier, 2004; see also Vanrie, Béatse, Wagemans, Sunaert, & Van Hecke, 2002). Indeed, using different sets of parameterized shapes located in a two-dimensional shape space, Op de Beeck, Wagemans, and Vogels (2001) demonstrated that the responses of IT neurons decreased monotonically with increasing parametric distance from their preferred shape in each stimulus set (or class). Generalization to new views (recognition) and new exemplars (categorization) seems possible by linearly combining the outputs of such image-based, exemplar view-tuned neurons (Gauthier & Palmeri, 2002; Poggio & Bizzi, 2004).
In humans, functional magnetic resonance imaging (fMRI) techniques have been employed to define the object-selective cortical regions, referred to as the lateral occipital complex or LOC (Malach et al., 1995). Neuroimaging and single-cell studies suggest a possible homology between macaque IT and the LOC since both areas display a similar anterior–posterior gradient in object adaptation, size invariance (Sawamura, Georgieva, Vogels, Vanduffel, & Orban, 2005), scrambling sensitivity (Grill-Spector et al., 1998; Lerner, Hendler, Ben-Bashat, Harel, & Malach, 2001; Vogels, 1999a), and a comparable amount of cue-invariant object selectivity (Grill-Spector et al., 1999; Grill-Spector, Kushnir, Edelman, Itzchak, & Malach, 1998; Logothetis & Sheinberg, 1996; Sary, Vogels, & Orban, 1993; Vogels & Orban, 1996). 
A recently developed technique called fMR adaptation (fMR-A) has frequently been used to study the invariant properties and/or tuning of neural subpopulations within fMRI voxels (Grill-Spector & Malach, 2001; see also Grill-Spector, Henson, & Martin, 2006; Sawamura, Orban, & Vogels, 2006; Sayres & Grill-Spector, 2006). In the event-related (ER) version (Kourtzi & Kanwisher, 2000, 2001), the level of activation is compared between trials in which one stimulus is presented twice and trials in which two different stimuli are presented. If the neurons that are adapted when a stimulus is shown twice are not sensitive to the stimulus difference, then the level of activation will be the same in the two trial types. However, if neurons are selective for the property change, then recovery of adaptation will be observed in trials in which two different stimuli are presented. 
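The logic of this paradigm can be illustrated with a toy simulation. The sketch below (Python; a minimal model under our own assumptions: Gaussian-tuned units along a single morph axis, a firing-rate-proportional adaptation rule, and arbitrary tuning widths) shows why the release from adaptation saturates quickly with stimulus distance when tuning is narrow but grows gradually when tuning is broad, which is the signature exploited in Experiment 1.

```python
import numpy as np

def release_from_adaptation(morph_distance, tuning_width, n_units=100):
    """Toy population: units with Gaussian tuning along a morph axis (0-1).
    The response to the second stimulus of a pair is suppressed in
    proportion to how strongly each unit responded to the first stimulus."""
    preferred = np.linspace(0.0, 1.0, n_units)      # preferred morph positions
    def pop_response(stim):
        return np.exp(-(stim - preferred) ** 2 / (2 * tuning_width ** 2))
    r1 = pop_response(0.0)                          # response to first stimulus
    r2 = pop_response(morph_distance)               # unadapted second response
    adapted = r2 * (1.0 - 0.5 * r1 / r1.max())      # at most 50% suppression
    return adapted.sum() / r2.sum()                 # fraction of signal recovered

for width in (0.1, 0.5):                            # narrow vs. broad tuning
    recovery = [release_from_adaptation(d, width) for d in (0.0, 0.33, 0.66, 1.0)]
    print(f"tuning width {width}: recovery at 0/33/66/100% =", np.round(recovery, 2))
```

With narrow tuning, recovery is nearly complete already at a 33% morph distance and asymptotes thereafter; with broad tuning, it keeps increasing across the whole range.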
Consistent with exemplar-based theories, a number of fMR-A (Gilaie-Dotan & Malach, 2007; Jiang, Blanz, & Riesenhuber, 2007) and computational studies (Jiang et al., 2006) using morphing between individual faces have concluded that faces are represented through a sparse population code of neurons that are sharply tuned to different individual face exemplars. In addition, single-cell (Leopold, Bondar, & Giese, 2006), fMR-A (Loffler, Yourganov, Wilkinson, & Wilson, 2005), computational (Giese & Leopold, 2005), and behavioral studies (Leopold, O'Toole, Vetter, & Blanz, 2001; Wilson, Loffler, & Wilkinson, 2002) using morphed faces from a multidimensional face space, in which the average face occupies the central position, suggest that faces and possibly other complex patterns are represented by contrastive neural mechanisms that are referenced to the central tendency of the stimulus category. For example, Loffler et al. (2005) concluded that individual faces are encoded by their direction and distance from a prototypical (average) face, with different neural populations responding at all distances from the average face within a restricted range of directions.
However, faces (and to a lesser extent other natural objects such as landscapes, body parts, etc.) are often considered to constitute special object categories requiring specific processes (Downing, Jiang, Shuman, & Kanwisher, 2001; Epstein & Kanwisher, 1998; Kanwisher, McDermott, & Chun, 1997; Yovel & Kanwisher, 2005). Thus, in this study, we investigated whether similar effects, namely a narrow tuning for individual exemplars with reference to a prototype, can be observed with other categories of everyday objects.
As a precursor to the current work, Panis, Vangeneugden, and Wagemans (in press) selected, for each of eight categories (car, vase, motorcycle, guitar, bird, fish, butterfly, beetle), the four exemplars that were perceived as most dissimilar in a measured 2-D shape space, and created 33% and 66% morphs between each possible pair. Panis et al. (in press) found that perceived (i.e., rated) similarity decreased monotonically, while reaction time in a sequential entry-level matching paradigm increased monotonically, with increasing transformational (morphing) distance between two exemplars from the same category. Crucially, the morphed, more central exemplars were rated on average as more typical than the original, extreme exemplars, as was also found by Graf (2002).
The goals of the current study were threefold. First, we investigated whether a sharp tuning for shape exists in human object-selective cortex, using morphing between contours of exemplars from non-face object categories (Experiment 1). Second, we studied whether the central "average" plays a role in the coding of shape similarity (Experiment 2). In this second experiment, we explicitly controlled for the number of presentations of each unique exemplar to study whether the more typical, central morphed exemplars elicit a lower signal compared to the extreme original exemplars, as suggested by the norm-based encoding principle. Third, we examined possible differences between animals and tools in both experiments. A number of studies have found intriguing differences in the processing of animals and tools (e.g., Kiani, Esteky, Mirpour, & Tanaka, 2007; Martin, Wiggs, Ungerleider, & Haxby, 1996). Furthermore, it has been suggested that the outline of natural objects (e.g., animals) is more informative for recognition than the outline of artefactual objects (e.g., tools; Lloyd-Jones & Luckhurst, 2002; Riddoch & Humphreys, 2004; Wagemans et al., 2008). A stronger neuronal sensitivity to the exact shape of animal outlines could result from the fact that, under natural viewing conditions, the outlines of animals are highly salient because of weak segmentation cues between the parts (e.g., covered with fur) and because animals tend to move against the background in a consistent orientation. In contrast to the more holistic processing of animals, the recognition of tools is thought to rely more on part-based processing, because there are strong segmentation cues between the parts and because tools typically do not move and can appear in different orientations (Riddoch & Humphreys, 2004). As a result, we might find a sharper shape tuning for the outlines of animals compared to tools.
The current fMR-A results show that a small change in exemplar shape produces a large release from adaptation in object-related cortex, but only for animal outlines, not for tools. No differences in adaptation level were observed for exemplars occupying central versus extreme positions in shape space.
Methods
Participants
Six right-handed volunteers participated in Experiment 1. Sixteen right-handed volunteers participated in Experiment 2. All signed a written informed consent form. Each participant's scanning session lasted less than 2 hours. All had normal or corrected-to-normal vision and were paid 50 euros for their participation.
Stimuli
Details on stimulus construction can be found in Panis et al. (in press). In short, contours were extracted from 257 line drawings of exemplars belonging to 25 categories (Op de Beeck & Wagemans, 2001; see also Op de Beeck, Béatse, Wagemans, Sunaert, & Van Hecke, 2000). Similarity ratings on all intra-category combinations were obtained from an independent group of ten people. By applying multi-dimensional scaling (MDS), we found that for 11 categories a two-dimensional solution provided a good fit to the similarity data (Figure 1A). Intermediate stimuli between the four extremes in this 2-D psychological similarity space were generated using morphing software (Magic Morph, freeware). Alongside the source and target images (the four extremes in each category), 33% and 66% morphs were created for each of the six possible morph lines by applying a large number of anchor points (Figure 1B). The output was subsequently processed in Matlab (The MathWorks, USA) to extract a single-pixel black contour. In this way, we created 12 new exemplars for each of the 11 categories (16 exemplars in total per category). Note that each of the four extremes served three times as source or target, so that each extreme stimulus has three morph lines associated with it.
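As an illustration of the scaling step, the following sketch (Python with scikit-learn; the rating matrix is random placeholder data, and metric MDS in scikit-learn is our choice of implementation, not necessarily the procedure used in the original analysis) recovers a two-dimensional configuration like the one in Figure 1A from averaged pairwise similarity ratings.

```python
import numpy as np
from sklearn.manifold import MDS

# Placeholder averaged similarity ratings (higher = more similar) for the
# 11 exemplars of one category; in the study these came from an
# independent group of ten raters.
rng = np.random.default_rng(0)
sim = rng.uniform(1, 9, size=(11, 11))
sim = (sim + sim.T) / 2                    # ratings are treated as symmetric
np.fill_diagonal(sim, 9)                   # an exemplar is identical to itself

dissim = sim.max() - sim                   # convert similarity to dissimilarity
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)         # 11 x 2 configuration (cf. Figure 1A)

# The four morph sources/targets would then be chosen as the exemplars
# lying closest to the four corners of this configuration.
print(coords)
```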
Figure 1. Illustration of stimulus construction. (A) Two-dimensional MDS solution applied to the similarity ratings of the outlines of the 11 car stimuli. The four encircled cars are used as the most extreme exemplars to create morphing sequences from the (1) upper-left, (2) lower-left, (3) lower-right, and (4) upper-right corners of the shape space. (B) Six morphing sequences created between all pairs of the four selected within-category exemplars. From top to bottom are the source shape, the 33% morph (i.e., 33% source and 66% target), the 66% morph (i.e., 66% source and 33% target), and the target shape. From left to right are the six morphing sequences, using the following shapes as source and target, respectively: 1–2, 1–3, 1–4, 2–3, 2–4, and 3–4.
Using this morphing procedure, we could manipulate the similarity or transformational distance between pairs of stimuli systematically (Hahn, Chater, & Richardson, 2003). Behavioral similarity and typicality ratings, as well as categorization data, from independent groups of 20, 56, and 22 people, respectively, showed (1) that a longer transformational distance between two stimuli resulted in a linear decrease of the rated similarity, (2) that there was a linear increase in reaction time and error rate with increasing transformational distance between pairs of stimuli, when deciding whether both belong to the same category, and (3) that the morphed exemplars of each category were judged on average as more typical than the original ones (Panis et al., in press). 
Eight categories, four man-made tools (car, vase, motorcycle, guitar) and four animals (butterfly, beetle, fish, bird), were selected for this study (Figure 2). Stimuli were projected onto a translucent screen attached inside the scanner, which the participants could see through a tilted mirror. The outlines were presented in black within a white horizontal box (14.5 × 11.5 visual degrees) on a black background.
Figure 2. Examples of two morph lines from each category, which were used in Experiment 2.
Experimental design
For Experiment 1, all six morph lines of each category were used, and five sets of 18 pairs of stimuli were constructed for each category (Figure 3). In the first set (Condition 0), each of the 16 exemplars of each category was paired with itself, and two randomly chosen pairs were duplicated.
Figure 3. Illustration of the construction of sets of stimulus pairs in Experiment 1 for the category fish. (a) The 16 exemplars (left) and an illustration of the position of the six morph lines (right). The black dots represent the four extreme exemplars and the white dots represent the twelve morphed exemplars. (b) In the first set, exemplars were paired with themselves. (c) In the second set, extreme stimuli were paired with 33% versions. (d, e) In the third and fourth sets, stimulus pairs were generated by pairing extreme stimuli with the 66% variants and the other extreme stimulus on each of the morph lines, respectively. For clarity, only one fourth of the arrows are drawn in d and e. See text for further details.
In the second set (Condition 33), 12 unique pairs were created by pairing each of the four extreme stimuli with the three 33% morphs on their morph lines, and six randomly chosen pairs were duplicated. In the third (Condition 66) and fourth sets (Condition 100), 18 stimulus pairs were generated as in the second set, but using the 66% variants and the other extreme stimulus on each of the morph lines, respectively. Because the first stimulus in each pair was always one of the extreme stimuli in the second, third, and fourth sets, we reversed the order of nine randomly chosen pairs in these three sets. The fifth set (Condition different) was created by pairing each of the 16 exemplars (and two duplicated ones) randomly with one exemplar from the other categories. In this last set, we also reversed the order of half the pairs. Eighteen additional fixation trials were included, resulting in 108 trials per scan. In each trial, the two stimuli of a pair were shown successively (150-ms stimulus presentations and 200-ms ISI), and every trial was triggered by a scanner pulse, every third TR (3045 ms). Within each condition, assignment of stimulus pairs to the fixed order of trial types (see Procedure) was random. Note that the number of extreme and morphed stimuli is the same in Conditions 33 and 66, while Condition 0 contains the lowest number of extreme stimuli and Condition 100 contains no morphed exemplars.
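The construction of the distance conditions can be summarized in code. The sketch below (Python) encodes one plausible reading of the pairing rules, under the assumption that "the 33% morph on a morph line" denotes, for each extreme, the morph lying at a 33% transformational distance from that extreme; all labels and helper names are ours.

```python
import random
random.seed(1)

# Hypothetical labels: extremes 1-4; morph "(a-b)-33" lies 33% of the way
# from extreme a to extreme b (and hence 67% of the way from extreme b).
extremes = [1, 2, 3, 4]
lines = [(a, b) for i, a in enumerate(extremes) for b in extremes[i + 1:]]

def morph(line, pct):
    return f"({line[0]}-{line[1]})-{pct}"

# Twelve unique pairs per set: each extreme with the stimulus at the given
# transformational distance on each of its three morph lines.
cond_33 = ([(a, morph((a, b), 33)) for a, b in lines] +
           [(b, morph((a, b), 66)) for a, b in lines])
cond_66 = ([(a, morph((a, b), 66)) for a, b in lines] +
           [(b, morph((a, b), 33)) for a, b in lines])
cond_100 = [(a, b) for a, b in lines] * 2

def fill_set(pairs, n_trials=18, n_reversed=9):
    """Duplicate randomly chosen pairs up to n_trials, then reverse the
    stimulus order of n_reversed randomly chosen trials."""
    trials = pairs + random.sample(pairs, n_trials - len(pairs))
    for i in random.sample(range(n_trials), n_reversed):
        trials[i] = trials[i][::-1]
    return trials

sets = {name: fill_set(pairs) for name, pairs in
        [("33", cond_33), ("66", cond_66), ("100", cond_100)]}
```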
Experiment 2 was designed to measure whether there was any difference in activation between morphs and extremes while controlling for the number of presentations of each stimulus (which was not controlled in Experiment 1). For Experiment 2, only two morph lines per category were selected (both horizontal, both vertical, or both crossing ones; see Figure 3A, right), yielding eight unique stimuli per category. Trial types were constructed so that each stimulus of each morph line was presented an equal number of times in a run. For each category, five sets of eight stimulus pairs were constructed (Figure 4). In the first set (Condition same extreme), the first and the last stimulus of each morph line were paired with themselves (0–0 trials and 100–100 trials, respectively), and each pair was presented four times. In the second set (Condition same morph), the same was done for the second and third stimuli, and both the 33–33 trials and 66–66 trials were presented four times. In the third set (Condition 33), the first stimulus of the morph line was paired with the second one and the third stimulus with the fourth one; both possible orders were repeated once (two 0–33 trials and two 33–0 trials; two 66–100 trials and two 100–66 trials).
Figure 4. Illustration of the construction of sets of stimulus pairs in Experiment 2 for one morph line of the category motorcycle. The four stimuli are labeled 0, 33, 66, and 100 to indicate their transformational distance from the first stimulus on the left. See text for further details.
In the fourth set (Condition 66), the first stimulus was paired with the third one, and the second stimulus with the fourth one, in a similar way as in the third set. In the last set (Condition different), each stimulus was paired with a random stimulus from a morph line of another, pseudo-randomly chosen category, and each of these four pairs was repeated once. Across these 8 × 5 pairs, each of the four stimuli belonging to the same morph line was presented exactly 18 times. In addition, 16 fixation trials were included, resulting in 96 trials per scan. Each trial lasted 2 s and started with a 350-ms presentation of the first stimulus of a pair, followed by a mask of 150 ms, the presentation of the second stimulus of the pair for 150 ms, and a mask for 1350 ms during which participants could respond (see Procedure). The first stimulus was presented longer than the second to increase within-trial adaptation. Between-trial adaptation was minimized by making sure that, when assigning stimulus pairs to the fixed trial-type order (see Procedure), no stimulus was repeated across consecutive trials, and by introducing a position shift between trials (both stimuli within a trial were centered on one of the corners of a square measuring 3 × 3 visual degrees, centered on the fixation point).
Procedure
In each experiment, each participant was run in one session consisting of 13 scans: first, eight event-related adaptation scans (one for each category), followed by four LOC-localizer scans and, finally, an anatomical scan. The order of the eight categories (i.e., the eight adaptation scans) was counterbalanced across participants. In each experiment, trial-type order in each scan was determined by a genetic algorithm (Wager & Nichols, 2003; http://www.columbia.edu/cu/psychology/tor/) optimized for one-back counterbalancing and for detecting differences in fMR signal between the conditions. Stimulus delivery, timing, and response detection were controlled by E-Prime (Psychology Software Tools, Inc.; http://www.pstnet.com/products/e-prime/) installed on a desktop connected to the projector and the scanner.
During each event-related adaptation scan, participants performed a same/different categorization task (sequential entry-level matching). In Experiment 1, each run contained 18 trials of each of five conditions: condition-0, condition-33, condition-66, condition-100, and condition-different. Participants indicated manually whether or not both objects belonged to the same category (two pneumatic buttons), and they were instructed to fixate a small cross that was continuously present. Reaction times were recorded for both responses. In Experiment 2, we used a slightly different task. At the beginning of each run, participants were told the name of the category of which exemplars would be shown. Participants had to press one pneumatic button when they detected each of the 16 "oddball" stimuli, i.e., the stimuli from the other categories that were presented in the different (catch) trials (which were not analyzed). No response was required on the within-category trials. Reaction times in the different trials were not recorded. Participants were instructed to respond accurately once the second stimulus in a trial had disappeared and to maintain fixation on a small cross that was continuously present in the middle of the screen. They detected the "oddball" stimuli in the different trials accurately (>90% correct).
Block-design ROI localizer scans
A simple alternating block design was used for localizing object-related voxels. Stimuli were black 2-D contours of intact novel and familiar objects and their scrambled versions, the same stimuli as used before by Kourtzi and Kanwisher (2000). The grid used for scrambling was also present in the intact versions. Grid size measured 18.1 × 17.2 visual degrees. In each localizer scan, participants received two blocks of five conditions: fixation, familiar intact, novel intact, familiar scrambled, and novel scrambled. Order of presentation of blocks was counterbalanced across participants and runs. Each block lasted 30 s, and there were 9 s of fixation at the beginning and end of each scan. A fixation cross that changed its color at random times (∼0.33 Hz on average) within each block was always present at the center of the screen. Participants were instructed to detect these color changes during the LOC-localizer scans (to engage attention equally in every condition at the center of the screen) and to use both hands alternately.
fMRI data collection
Scanning was carried out on a 3T Philips Intera scanner at the Radiology Unit of the University Hospital Gasthuisberg (UZ GHB) in Leuven, Belgium. For the ER adaptation scans, a fast event-related Echo Planar Imaging (FE-EPI) ascending sequence was used (326 dynamic scans in Experiment 1 and 194 dynamic scans in Experiment 2; TR = 1015 ms; FOV = 230 × 230 mm; TE = 34 ms; flip angle = 66°). Seventeen axial/transverse slices (slice thickness = 5 mm, gap = 0 mm) with an in-plane resolution of 1.8 × 1.8 mm were positioned to cover the ventral part of the brain, excluding the top of the brain. For the LOC-localizer scans, a whole-brain EPI ascending sequence was used (157 dynamic scans; TR = 3000 ms; FOV = 200 × 200 mm; TE = 30 ms; flip angle = 90°; 52 axial slices; slice thickness = 2.5 mm; in-plane resolution = 1.6 × 1.6 mm). The anatomical scan was a T1-weighted Turbo Field Echo sequence (TR = 9.68 ms; FOV = 250 × 250 mm; TE = 4.6 ms; flip angle = 8°; 182 slices; slice thickness = 1.2 mm; in-plane resolution = 1 × 1 mm).
Data analysis
SPM2 (http://www.fil.ion.ucl.ac.uk/spm/) was used to preprocess and analyze the imaging data of each experiment separately. Preprocessing consisted of realigning the functional volumes, slice-time correction with the middle slice as reference (only for the event-related scans), registration to the anatomical image, normalization into a standard space, and finally spatial smoothing (FWHM = 6 mm). For the adaptation data, a general linear model based on the actual timing data was constructed in which the expected hemodynamic response changes were modeled using an informed basis set (Worsley & Friston, 1995). This allows the height, width, and latency of the signal evoked by object presentation to be modeled. We included two parametrically modulated regressors modeling the possible presence of linear and/or quadratic effects over time, which could result from inter-trial adaptation in LOC during each scan. The data from the LOC-localizer scans were modeled using a general linear model with a simple boxcar function for each condition. LOC was defined as the set of voxels more activated by intact than by scrambled object contours, surviving an uncorrected threshold of p < .001 (see Figure 5).
Figure 5. The inflated left hemisphere of one participant with the "object-scrambled" contrast overlaid, showing both subregions of the lateral occipital complex (LOC), i.e., the lateral occipital (LO) and posterior fusiform (PFS) or ventral occipitotemporal cortex (VOT).
Freesurfer (http://surfer.nmr.mgh.harvard.edu/) was used to flatten the normalized structural image of the brain of each participant and to overlay the LOC-contrast image (intact > scrambled). As in previous studies (Altmann, Deubelius, & Kourtzi, 2004; Grill-Spector et al., 1999; Grill-Spector, Kushnir, Hendler, & Malach, 2000), two LOC subregions were marked for each hemisphere separately, based on anatomical landmarks if the border was not visually clear: LO (lateral occipital) and VOT (ventral occipitotemporal, or posterior fusiform; PFS). Using in-house software, we transformed the coordinates of the ROIs of each participant in Freesurfer to SPM2 coordinates (in MNI space). In each experiment, a low-level V1/V2 ROI was constructed as the group of voxels (1) active in the all-effects contrast and (2) present in an ROI around the calcarine sulcus predefined in MarsBaR (http://marsbar.sourceforge.net/). MarsBaR was used to extract and average the time courses of every voxel in each ROI, separately for each run (or category) and participant, and to estimate the parameters of the model. From the estimated parameters, we calculated the estimated percent signal change (PSC) for each effect as follows: canonical beta(effect) × 100 / beta(constant).
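For concreteness, the PSC computation can be sketched as follows (Python with NumPy). This toy version fits a single regressor per condition by ordinary least squares, whereas the actual analysis estimated an informed basis set in SPM2 and MarsBaR; the regressor names and the simulated design are ours.

```python
import numpy as np

def percent_signal_change(timecourse, design, names):
    """Least-squares GLM fit of an ROI-averaged time course; PSC for each
    effect is computed as in the paper: beta(effect) * 100 / beta(constant).

    design : (n_scans, n_regressors) matrix whose last column is the constant
    names  : regressor names, e.g. ["cond0", "cond33", "constant"]
    """
    betas, *_ = np.linalg.lstsq(design, timecourse, rcond=None)
    b_const = betas[names.index("constant")]
    return {n: 100.0 * b / b_const for n, b in zip(names, betas) if n != "constant"}

# Toy usage with a hypothetical two-condition design:
n = 326                                  # number of dynamic scans in Experiment 1
rng = np.random.default_rng(2)
design = np.column_stack([rng.binomial(1, .2, n), rng.binomial(1, .2, n), np.ones(n)])
y = design @ np.array([1.5, 2.5, 100.0]) + rng.normal(0, 1, n)
print(percent_signal_change(y, design, ["cond0", "cond33", "constant"]))
```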
The PSC data of both adaptation experiments were fitted separately using general linear mixed-model (GLMM) theory (Littell, Milliken, & Stroup, 1996) and the MIXED procedure in SAS (SAS Institute, Inc.). In both repeated-measures ANOVAs, the within-participant fixed factors included condition (five levels in Experiment 1, four levels in Experiment 2), hemisphere (left, right), subregion (LO, VOT), type (animal, tool), and category-nested-within-type (four animal and four tool categories). Participants were modeled as a random factor.
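A roughly equivalent mixed-model fit can be expressed in Python with statsmodels; the sketch below is a simplified stand-in for the SAS analysis (the file name and column names are hypothetical, and the category-nested-within-type factor and interaction structure are reduced for brevity).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table of PSC estimates: one row per
# participant x condition x type x hemisphere x subregion cell.
df = pd.read_csv("psc_long_format.csv")   # assumed file name

# Random intercept per participant; condition, type, hemisphere, and
# subregion as within-participant fixed effects.
model = smf.mixedlm("psc ~ C(condition) * C(type) + C(hemisphere) + C(subregion)",
                    data=df, groups=df["participant"])
result = model.fit()
print(result.summary())
```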
Results
Behavioral data
In Experiment 1, each participant received 720 trials in total (8 categories × 5 conditions × 18 trials). Because of practical problems with response collection and occasional image presentation times of less than 100 ms, on average 114 trials per participant were considered invalid (15.9% of all trials; numbers for each participant: 28, 13, 155, 291, 182, 17). An error was made in only 25 of the remaining trials, leaving 3609 trials with correct responses. For each participant, trials with an RT more than 3 SDs from that participant's mean were excluded, leaving 3560 valid trials (numbers of valid trials for each participant: 669, 697, 554, 422, 530, 688).
An ANOVA on the RTs of the valid trials, with condition and category considered fixed and participant considered random, indicated a significant main effect of condition (F(4,3520) = 21.93, p < .0001). An a priori test for a linear trend over the first four conditions was significant (F(1,3520) = 6.48, p < .012), indicating longer RTs with increasing transformational distance between the exemplar contours (see Figure 6). The main effect of category was not significant (F(7,1656) < 1). Although the interaction between condition and category was significant (F(28,3520) = 2.07, p < .001), the effect of condition was significant in each category (see Table 1).
Table 1. F- and p-values for the effect of condition on RTs within each category.
Category     F(4,3520)   p
Car          5.52        .0002
Guitar       9.12        <.0001
Beetle       10.69       <.0001
Motorcycle   6.00        <.0001
Vase         10.34       <.0001
Fish         15.29       <.0001
Butterfly    10.98       <.0001
Bird         3.30        .0104
Figure 6. Mean reaction time as a function of condition in Experiment 1. Error bars indicate standard error of the mean.
Imaging data
The ANOVA on the PSCs of Experiment 1 showed a significant main effect of condition (F(4,20) = 7.32, p = .0008; Figure 7). Three a priori contrasts (Bonferroni-corrected alpha = .0167) showed that PSC was lower in condition 0 than in condition 33 (F(1,20) = 9.12, p = .0068), and that there was no difference between conditions 33 and 66 (F(1,20) = 1.42, p = .25) nor between conditions 66 and 100 (F(1,20) = 0.13, p = .72).
Figure 7. PSC as a function of condition in Experiment 1.
However, there was also strong evidence for the interaction between condition and type (F(4,730) = 3.39, p = .0093; Figure 8), but not for the main effect of type (F(1,35) = 0.8, p = .38). The effect of type was significant only for condition different (F(1,59.6) = 5.23, p = .0258), and the effect of condition was significant for both animals (F(4,26.1) = 8.73, p = .0001) and tools (F(4,26.1) = 4.93, p = .0043). Tukey–Kramer corrected pairwise comparisons showed that PSC in condition 0 was significantly lower than in condition 33 for animals (t(26.1) = −3.35, p = .0291), but not for tools (t(26.1) = −2.30, p = .39), and that PSC in condition 33 did not differ from that in condition 66 for animals (t(26.1) = −1.18, p = .98) or tools (t(26.1) = −1.05, p = .99). The effect of subregion was only marginally significant (F(1,5) = 5.03, p = .075).
Figure 8. PSC as a function of condition and type in Experiment 1.
The ANOVA on the PSCs of Experiment 2 showed a significant main effect of subregion (F(1,15) = 4.85, p < .05) and condition (F(3,45) = 16.36, p < .0001; Figure 9), but no significant interaction between subregion and condition (F(3,1725) < 1, p = .87). Four a priori contrasts (Bonferroni-corrected alpha = .0125) showed a difference between 0-extreme and 33 (F(1,45) = 7.09, p = .0107), between 0-morph and 33 (F(1,45) = 13.69, p = .0006), and between 33 and 66 (F(1,45) = 7.16, p = .0104), but no difference between 0-extreme and 0-morph (F(1,45) = 1.08, p = .31). Thus, although the morphs were judged as being more typical for a category than the extreme stimuli (see Panis et al., in press), the fMRI data do not show any difference in the level of activation between morphs and extreme stimuli.
Figure 9. PSC as a function of condition in Experiment 2.
However, as in Experiment 1, there was strong evidence for the interaction between condition and type (F(3,1725) = 3.87, p = .009; Figure 10), but only moderate evidence for the main effect of type (F(1,15) = 3.89, p = .0674). The effect of type was significant in conditions 33 (F(1,22.2) = 8.59, p = .0077) and 66 (F(1,22.2) = 4.32, p = .0495), and the effect of condition was significant for both animals (F(3,118) = 16.16, p < .0001) and tools (F(3,118) = 6.92, p = .0002). Tukey–Kramer corrected pairwise comparisons showed (1) that PSC in condition 0-extreme was significantly lower than in condition 33 for animals (t(118) = −4.14, p = .001), but not for tools (t(118) = −0.04, p = 1); (2) that PSC in condition 0-morph was significantly lower than in condition 33 for animals (t(118) = −4.2, p = .0007), but not for tools (t(118) = −1.6, p = .75); (3) that PSC in condition 33 did not differ from that in condition 66 for animals (t(118) = 1.33, p = .89) or tools (t(118) = 2.68, p = .081); and (4) that PSC in condition 0-extreme did not differ from that in condition 0-morph for animals (t(118) = 0.06, p = 1) or tools (t(118) = 1.56, p = .77).
Figure 10. PSC as a function of condition and type in Experiment 2.
Thus, just as in Experiment 1, we find a significant difference between condition 0 and condition 33 but only for animals. Previous studies (Gilaie-Dotan & Malach, 2007; Jiang et al., 2007) have used a strong release from adaptation with small stimulus differences combined with no further release from adaptation with larger stimulus differences as evidence that most neurons in the neuronal population are narrowly tuned for individual exemplar faces. To study directly whether the difference in adaptation level between condition 0 and condition 33 is larger than the difference between condition 33 and 66 for animals only, we proceeded as follows. First, we calculated the average PSC for each combination of participant, type, and condition, separately for each experiment. Next, we calculated two difference scores for each combination of participant and type by subtracting the PSC in condition 0 from the PSC in condition 33 (score “33–0”) and by subtracting the PSC in condition 33 from that in condition 66 (score “66–33”). Note that we calculated the PSC in condition 0 in Experiment 2 by averaging between condition 0-extreme and 0-morph. Next, we tested whether there were significant differences between both scores for the two types of categories, in both experiments separately and combined (Table 2). 
Table 2. Results of six paired t-tests comparing "33–0" with "66–33."
                   Experiment 1 (N = 6)    Experiment 2 (N = 16)    Experiments 1 and 2 (N = 22)
                   Average    p            Average    p             Average    p
Animals   33–0     0.685      .1044        0.325      .0397         0.423      .0066
          66–33    0.241                   0.104                    0.141
Tools     33–0     0.471      .3695        0.064      .1923         0.175      .6903
          66–33    0.215                   0.224                    0.221
Table 2 shows that, for animals, the 33–0 difference was higher than the 66–33 difference in each experiment, although it was significant only in Experiment 2 (paired t-test, t(15) = 2.25, p = .0397). No consistent or significant differences between the two scores were found for tools. When we combined the data of the 22 participants from both experiments, the average 33–0 score for animals (0.42) was significantly higher than the 66–33 score (0.14), t(21) = 3.02, p = .0066. No difference between the two scores was observed for tools. Finally, to compare tools and animals directly, we calculated the higher-order statistic Diff = (33–0) − (66–33) to get a single number per type per participant. A paired t-test comparing Diff for tools (mean −0.046) with Diff for animals (mean 0.2823) showed a significant difference (t(21) = 2.5, p = .021). The 33–0 differences were also larger for animals (mean 0.42) compared to tools (mean 0.17; paired t-test, t(21) = 3.25, p = .0039).
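A minimal sketch of this difference-score analysis (Python with SciPy; the per-participant PSC values below are random placeholders, not the data summarized in Table 2):

```python
import numpy as np
from scipy import stats

def tuning_scores(psc):
    """psc maps condition (0, 33, 66) to per-participant mean PSC arrays;
    returns the "33-0" and "66-33" difference scores used in Table 2."""
    return psc[33] - psc[0], psc[66] - psc[33]

# Placeholder per-participant means for the animal categories (N = 16,
# as in Experiment 2); real values would come from the ROI analysis.
rng = np.random.default_rng(3)
psc_animals = {0: rng.normal(1.0, .2, 16),
               33: rng.normal(1.3, .2, 16),
               66: rng.normal(1.4, .2, 16)}
d33_0, d66_33 = tuning_scores(psc_animals)
t, p = stats.ttest_rel(d33_0, d66_33)      # paired t-test, as in Table 2
print(f"33-0 mean = {d33_0.mean():.2f}, 66-33 mean = {d66_33.mean():.2f}, "
      f"t = {t:.2f}, p = {p:.4f}")
```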
The other factors in each analysis (subregion, hemisphere, category-nested-within-type) were not involved in any interactions that showed consistent effects across both experiments. Crucially, in both experiments, the higher-order interactions containing the interaction between condition and type were not significant. 
To rule out alternative explanations in terms of low-level features, we tested whether similar effects could be found in the low-level V1 ROI in two separate analyses. The main effects of condition and type, and the interaction effect between condition and type were not significant in any analysis (smallest p = .15). The other factors in each analysis (hemisphere, category-nested-within-type) were not involved in any interactions that showed consistent effects across both experiments, and the higher-order interactions containing the interaction between condition and type were not significant. 
Thus, the results of both experiments are consistent with the presence of an asymptote around 33% in the recovery of adaptation with increasing transformational distance, but only for outlines of animals. In other words, there appears to be a sharp tuning for outline shape of animal exemplars. We wondered whether this result might simply reflect larger spaces spanned by the stimuli for the animals rather than a sharper tuning within equally sized spaces. To address this possibility, we performed two additional analyses. First, we measured the responses of the C2 units of the hierarchical object recognition model (HMAX) of Riesenhuber and Poggio (1999) to each outline stimulus employed in Experiment 2. The layer of C2 units is the last layer in this model that is “standard” and not adapted to the exact stimuli that are included in the stimulus set. 
HMAX model-based similarities between the pairs of stimuli of the critical Condition 33 in Experiment 2 were calculated as the Euclidean distance between the two response patterns and averaged for each category. A t-test for independent samples showed no significant difference between animals and tools in average similarity (t(6) = −0.9, p = .4).
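This control analysis amounts to the following computation (Python; the C2 dimensionality, the number of Condition-33 pairs per category, and the random response patterns are placeholder assumptions):

```python
import numpy as np
from scipy import stats

def mean_pair_distance(c2_first, c2_second):
    """Average Euclidean distance between the C2 response patterns evoked
    by the two stimuli of each Condition-33 pair (one row per pair)."""
    return np.linalg.norm(c2_first - c2_second, axis=1).mean()

# Placeholder C2 outputs for 4 animal and 4 tool categories, with 8
# Condition-33 pairs per category and 256 C2 units.
rng = np.random.default_rng(4)
animal_d = [mean_pair_distance(rng.random((8, 256)), rng.random((8, 256)))
            for _ in range(4)]
tool_d = [mean_pair_distance(rng.random((8, 256)), rng.random((8, 256)))
          for _ in range(4)]
t, p = stats.ttest_ind(animal_d, tool_d)   # independent-samples t-test, df = 6
print(f"t(6) = {t:.2f}, p = {p:.3f}")
```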
Second, Panis et al. (in press) collected similarity ratings for pairs of stimuli separated by a certain transformational distance. We calculated the average rated similarity for each combination of category (N = 8), morphing sequence (N = 6), and condition (0, 33, 66, 100). To test whether the rated similarity differs between tools and animals, we selected only the ratings of the critical Condition 33 and compared the distribution of the 24 averages for animals (4 categories × 6 morphing sequences) with those for tools through a Friedman ANOVA. No significant difference was observed (χ² = 0.67, p = .41).
Thus, no obvious perceptual differences were found between the shape changes in condition 33 between the animal and tool stimuli we used in Experiment 2. Therefore, it seems that the difference in fMR adaptation in shape-change condition 33 between tools and animals reflects sharper tuning in equally sized perceptual spaces and that this sharper tuning is not a simple reflection of the construction of invariant representations as it is modeled by Riesenhuber and Poggio (1999).1 
Discussion
In two event-related fMR-A experiments, we studied the representation of shape similarity in object-related cortex. Both experiments revealed a significant interaction between the amount of shape change (condition) and the type of category (animal versus tool) on the recovery of adaptation. Although Experiment 2 lacked the distance-100 condition needed to test for selectivity across a wider range of morphing distances, we conclude that the fine tuning for within-category differences in outline shape in object-related cortex is more prominent for animals than for tools. Indeed, a 33% change in outline shape led to a higher recovery of adaptation for animals compared to tools (e.g., Figure 10).
Here we take the relative degree of fMRI adaptation as an index of the selectivity of the underlying neuronal population. However, the relationship between fMR adaptation, neural adaptation, and neural selectivity is still unclear. For example, Sawamura et al. (2006) suggested that the tuning of neuronal responses is broader than that estimated by fMR adaptation, at least at the level of single neurons. However, that study did not show that it is invalid to infer relative selectivity for two object differences from the relative degree of adaptation across the same neural population, which is what we are doing here (e.g., sharper selectivity for natural categories because of faster release from adaptation). Another possible problem is that recovery from adaptation reflects changes in the input to a specific neural population rather than its spike output, which to some degree is a problem for all BOLD fMRI studies (Logothetis, Pauls, Augath, Trinath, & Oeltermann, 2001). Thus, when we infer from our results that neural selectivity in LOC is sharper for outline shape changes of animals compared to tools, this difference might exist at the input level, at the output level, or at both.
What could have caused this stronger neuronal sensitivity to the exact shape of animal outlines in LOC? A first possibility is that, during early visual categorization experience with real-life objects, the outlines of animals are more salient (weak part-segmentation cues, movement, consistent orientation) than those of tools (Riddoch & Humphreys, 2004), leading to stronger responses to animal outlines. This higher selectivity for animal outlines may then underlie the behavioral findings that the outlines of natural objects are more informative for recognition than the outlines of artefacts (Lloyd-Jones & Luckhurst, 2002; Riddoch & Humphreys, 2004; Wagemans et al., 2008). A related explanation is that animals are more structurally similar to one another than artefacts are (Humphreys, Riddoch, & Quinlan, 1988). We can therefore expect that small differences in animal shapes are more relevant for categorization than small differences in tool shapes. In that respect, it is interesting that an object recognition model that does not take this task relevance into account was also unable to account for the higher sensitivity for animals.
Like animals, faces are a natural object category, and our results are consistent with the fMR-A results of Gilaie-Dotan and Malach (2007), who found that 30% morphing of faces was sufficient to produce activation levels in face-related cortex that were not significantly different from activations to completely different faces. Although the distinction between natural and artefactual object categories is confounded with the living/non-living and non-rigid/rigid distinctions, differences in the image statistics of natural and artefactual objects might, in general, underlie two observations: that the IT population response quickly distinguishes between natural and artefactual objects (Kiani et al., 2007), and that faces and houses seem to be the categories sharing the smallest number of visual features, resulting in an apparently modular representation of faces and houses in fMRI studies (O'Toole et al., 2005). Nevertheless, for many studies comparing animals and tools or related distinctions, including ours, it is not clear which visual properties of animals and tools form the basis of the differences between these object types. More research is needed to pinpoint the exact difference.
To study whether object representations are tuned to a norm (or average), we performed a second experiment in which we contrasted only four within-category trial types, so that each unique stimulus could be presented an equal number of times. Although the morphs were judged as being more typical for a category than the extreme stimuli (see Panis et al., in press), the fMRI data of Experiment 2 do not show any difference in the level of activation between morphs and extreme stimuli, contrary to the prediction of norm-based encoding models (e.g., Graf, 2002, 2006; Loffler et al., 2005). Although it is generally difficult to interpret nonsignificant effects, the difference between the two conditions was quite small and actually far from significant, in contrast to other differences between conditions with the same number of observations, suggesting that lack of statistical power was not an issue.
Although the task in each experiment required categorization of each object stimulus at the basic level (e.g., vase vs. car), it is possible that norm-based encoding would be revealed using an explicit categorization task. However, differences in fMRI adaptation and in single-unit activity between typical morphs and extreme category exemplars have been observed in other studies that did not use a categorization task, but passive fixation (Kayaert, Biederman, Op de Beeck, & Vogels, 2005; Leopold et al., 2006) or a one-back same-different task on stimulus orientation for which the difference between exemplars was irrelevant (Loffler et al., 2005). Thus, while the morph versus exemplar difference was irrelevant for our scanner task, the same was true for all previous studies that showed a difference between extremes and prototypes.
Why do the results of studies using a multidimensional face space with the average face at the center point to norm-based encoding? One possibility is that the results of those studies might have been confounded by adaptation effects. By design, such studies involve the repeated presentation of stimuli similar to (and including) the average face, which could lead to the selective adaptation of neurons selective for the average face, creating the impression of a neuronal population that responds little to the average face and increasingly more to faces different from the average (Jiang et al., 2007). 
Another possibility is that faces are indeed special because of the extensive experience humans have with them. Humans are all face experts, and face representations might therefore have evolved to incorporate a norm-based encoding principle. Indeed, while novel, unfamiliar object classes that are rated as having a more (dis)similar shape are associated with a more (dis)similar response pattern distributed across object-selective cortex, additional experience, including learning about semantic associations, might result in response patterns that are determined by factors other than perceived shape (Op de Beeck, Torfs, & Wagemans, 2007). Future studies could investigate whether norm-based coding is present for other objects of expertise.
Finally, the interaction between shape change (condition) and hemisphere was not significant in either experiment. Thus, we did not observe clear hemispheric differences in the coding of shape similarity, in contrast to previous fMR-A studies that concluded that the right hemisphere is more sensitive to exemplar changes (Koutstaal et al., 2001; Simons, Koutstaal, Prince, Wagner, & Schacter, 2003; Vuilleumier, Henson, Driver, & Dolan, 2002). To explain those fMR-A results, Grill-Spector (2001) hypothesized either (1) that the representation in the left hemisphere is more semantic compared with that in the right hemisphere or (2) that the representation in the left hemisphere is feature-based, whereas the representation in the right hemisphere is holistic. She concluded that further studies should control for the semantic and perceptual similarity between the stimuli in the different conditions. In the current experiments, we controlled for perceptual similarity between the stimuli by using morphed exemplars, and for semantic similarity by presenting the within-category trials of each category in a single run. Furthermore, we tried to minimize the influence of feedback from semantic representations and attentional generators by presenting the name of the relevant category at the beginning of each run in Experiment 2. A possible reason why these previous studies, which used exemplars from the same category that were visually much more dissimilar from each other (e.g., different shapes, colors, textures, etc.) than our morphed exemplar outlines, observed strong exemplar sensitivity in the right hemisphere might therefore be that semantic and lexical category information is typically processed in the left hemisphere, while spatial attention is generated in the right hemisphere. During the recognition of those unpredictable and visually very dissimilar category exemplars, feedback from semantic and lexical representations in the left hemisphere to shape representations might lead to the observed category selectivity or generalization across exemplars, while feedback from spatial attentional processes in the right hemisphere to shape representations might have led to the observed exemplar selectivity (see also Mahon et al., 2007; Seger et al., 2000).
In conclusion, just as with faces, there appears to be a narrower tuning in human object-selective cortex for the shapes of animal exemplars compared to man-made tools, consistent with the idea that under natural viewing conditions the outlines of natural objects are more salient than the outlines of artefactual objects. More generally, the current results are consistent with the idea that LOC neurons do not extract a prototype (Vogels, 1999b), but that the appearance of a shape, or rather its similarity to stored object information, is represented by a pattern of activity across exemplar-tuned neurons in ventral occipitotemporal cortex (Edelman, 1998, 1999). Decisions about categorization and identification then depend on frontal cortex (Freedman, Riesenhuber, Poggio, & Miller, 2002; Spratling & Johnson, 2006), which might guide the retrieval of familiar, frequently experienced exemplars, resulting in a context- and time-dependent coding of shape similarity (Lamberts, 2000; Nosofsky, 1986, 1988).
Acknowledgments
This research was supported by a research grant from the Fund for Scientific Research (FWO Flanders, G.0218.06). This study is also part of larger research programs with financial support from the University Research Council (GOA/2005/03-TBA and IDO/02/004). 
We want to thank Ronald Peeters, Paul Van Hecke, and Stefan Sunaert of the Radiology Department at the University Hospital Gasthuisberg in Leuven for help and advice with scanning, Wouter De Baene, Céline Gillebert, Bart Ons, Greet Kayaert, and Rufin Vogels for interesting discussions regarding this study, and two anonymous reviewers for their helpful comments on the previous version of this paper. 
Commercial relationships: none. 
Corresponding author: Johan Wagemans. 
Email: johan.wagemans@psy.kuleuven.be. 
Address: University of Leuven, Department of Psychology, Tiensestraat 102, B-3000 Leuven, Belgium. 
Footnotes
1. Similarly, the large fMRI adaptation difference between the 0% and 33% morphs does not reflect a higher perceptual dissimilarity between these stimuli than between other morph steps (e.g., between the 33% and 66% morphs). Panis et al. (in press) collected similarity ratings and reaction time data in a same–different sequential matching paradigm for stimulus pairs separated by a certain transformational distance. They found that the average similarity rating decreased steadily, and the average reaction time increased steadily, with increasing transformational distance.
References
Altmann, C. F. Deubelius, A. Kourtzi, Z. (2004). Shape saliency modulates contextual processing in the human lateral occipital complex. Journal of Cognitive Neuroscience, 16, 794–804. [PubMed] [CrossRef] [PubMed]
Brincat, S. L., & Connor, C. E. (2004). Underlying principles of visual shape selectivity in posterior inferotemporal cortex. Nature Neuroscience, 7, 880–886.
Brincat, S. L., & Connor, C. E. (2006). Dynamic shape synthesis in posterior inferotemporal cortex. Neuron, 49, 17–24.
Connor, C. E., Brincat, S. L., & Pasupathy, A. (2007). Transformation of shape information in the ventral pathway. Current Opinion in Neurobiology, 17, 140–147.
DiCarlo, J. J., & Cox, D. D. (2007). Untangling invariant object recognition. Trends in Cognitive Sciences, 11, 333–341.
Downing, P. E., Jiang, Y., Shuman, M., & Kanwisher, N. (2001). A cortical area selective for visual processing of the human body. Science, 293, 2470–2473.
Edelman, S. (1998). Representation is representation of similarities. Behavioral and Brain Sciences, 21, 449–467.
Edelman, S. (1999). Representation and recognition in vision. Cambridge, MA: MIT Press.
Epstein, R., & Kanwisher, N. (1998). A cortical representation of the local visual environment. Nature, 392, 598–601.
Freedman, D. J., Riesenhuber, M., Poggio, T., & Miller, E. K. (2002). Visual categorization and the primate prefrontal cortex: Neurophysiology and behavior. Journal of Neurophysiology, 88, 929–941.
Gauthier, I., & Palmeri, T. J. (2002). Visual neurons: Categorization-based selectivity. Current Biology, 12, R282–R284.
Giese, M. A., & Leopold, D. A. (2005). Physiologically inspired neural model for the encoding of face spaces. Neurocomputing, 65–66, 93–101.
Gilaie-Dotan, S., & Malach, R. (2007). Sub-exemplar shape tuning in human face-related areas. Cerebral Cortex, 17, 325–338.
Graf, M. (2002). Form, space and object: Geometrical transformations in object recognition and categorization.
Graf, M. (2006). Coordinate transformations in object recognition. Psychological Bulletin, 132, 920–945.
Grill-Spector, K. (2001). Semantic versus perceptual priming in fusiform cortex. Trends in Cognitive Sciences, 5, 227–228.
Grill-Spector, K., Henson, R., & Martin, A. (2006). Repetition and the brain: Neural models of stimulus-specific effects. Trends in Cognitive Sciences, 10, 14–23.
Grill-Spector, K., Kushnir, T., Edelman, S., Avidan, G., Itzchak, Y., & Malach, R. (1999). Differential processing of objects under various viewing conditions in the human lateral occipital complex. Neuron, 24, 187–203.
Grill-Spector, K., Kushnir, T., Edelman, S., Itzchak, Y., & Malach, R. (1998). Cue-invariant activation in object-related areas of the human occipital lobe. Neuron, 21, 191–202.
Grill-Spector, K., Kushnir, T., Hendler, T., Edelman, S., Itzchak, Y., & Malach, R. (1998). A sequence of object-processing stages revealed by fMRI in the human occipital lobe. Human Brain Mapping, 6, 316–328.
Grill-Spector, K., Kushnir, T., Hendler, T., & Malach, R. (2000). The dynamics of object-selective activation correlate with recognition performance in humans. Nature Neuroscience, 3, 837–843.
Grill-Spector, K., & Malach, R. (2001). fMR-adaptation: A tool for studying the functional properties of human cortical neurons. Acta Psychologica, 107, 293–321.
Hahn, U., Chater, N., & Richardson, L. B. (2003). Similarity as transformation. Cognition, 87, 1–32.
Humphreys, G. W., Riddoch, M. J., & Quinlan, P. T. (1988). Cascade processes in picture identification. Cognitive Neuropsychology, 5, 67–103.
Jiang, X., Blanz, V., & Riesenhuber, M. (2007). fMRI and behavioral evidence against a “norm-based” face representation [Abstract]. Journal of Vision, 7(9):888.
Jiang, X., Rosen, E., Zeffiro, T., Vanmeter, J., Blanz, V., & Riesenhuber, M. (2006). Evaluation of a shape-based model of human face discrimination using fMRI and behavioral techniques. Neuron, 50, 159–172.
Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17, 4302–4311.
Kayaert, G., Biederman, I., Op de Beeck, H. P., & Vogels, R. (2005). Tuning for shape dimensions in macaque inferior temporal cortex. European Journal of Neuroscience, 22, 212–224.
Kiani, R., Esteky, H., Mirpour, K., & Tanaka, K. (2007). Object category structure in response patterns of neuronal population in monkey inferior temporal cortex. Journal of Neurophysiology, 97, 4296–4309.
Kourtzi, Z., & Kanwisher, N. (2000). Cortical regions involved in perceiving object shape. Journal of Neuroscience, 20, 3310–3318.
Kourtzi, Z., & Kanwisher, N. (2001). Representation of perceived object shape by the human lateral occipital complex. Science, 293, 1506–1509.
Koutstaal, W., Wagner, A. D., Rotte, M., Maril, A., Buckner, R. L., & Schacter, D. L. (2001). Perceptual specificity in visual object priming: Functional magnetic resonance imaging evidence for a laterality difference in fusiform cortex. Neuropsychologia, 39, 184–199.
Lamberts, K. (2000). Information-accumulation theory of speeded categorization. Psychological Review, 107, 227–260.
Leopold, D. A., Bondar, I. V., & Giese, M. A. (2006). Norm-based face encoding by single neurons in the monkey inferotemporal cortex. Nature, 442, 572–575.
Leopold, D. A., O'Toole, A. J., Vetter, T., & Blanz, V. (2001). Prototype-referenced shape encoding revealed by high-level aftereffects. Nature Neuroscience, 4, 89–94.
Lerner, Y., Hendler, T., Ben-Bashat, D., Harel, M., & Malach, R. (2001). A hierarchical axis of object processing stages in the human visual cortex. Cerebral Cortex, 11, 287–297.
Littell, R. C., Milliken, G. A., & Stroup, W. W. (1996). SAS system for mixed models. Cary, NC: SAS Institute.
Lloyd-Jones, T. J., & Luckhurst, L. (2002). Outline shape is a mediator of object recognition that is particularly important for living things. Memory & Cognition, 30, 489–498.
Loffler, G., Yourganov, G., Wilkinson, F., & Wilson, H. R. (2005). fMRI evidence for the neural representation of faces. Nature Neuroscience, 8, 1386–1390.
Logothetis, N. K., Pauls, J., Augath, M., Trinath, T., & Oeltermann, A. (2001). Neurophysiological investigation of the basis of the fMRI signal. Nature, 412, 150–157.
Logothetis, N. K., & Sheinberg, D. L. (1996). Visual object recognition. Annual Review of Neuroscience, 19, 577–621.
Mahon, B. Z., Milleville, S. C., Negri, G. A., Rumiati, R. I., Caramazza, A., & Martin, A. (2007). Action-related properties shape object representations in the ventral stream. Neuron, 55, 507–520.
Malach, R., Reppas, J. B., Benson, R. R., Kwong, K. K., Jiang, H., Kennedy, W. A., et al. (1995). Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proceedings of the National Academy of Sciences of the United States of America, 92, 8135–8139.
Martin, A., Wiggs, C. L., Ungerleider, L. G., & Haxby, J. V. (1996). Neural correlates of category-specific knowledge. Nature, 379, 649–652.
Nosofsky, R. M. (1986). Attention, similarity, and the identification-categorization relationship. Journal of Experimental Psychology: General, 115, 39–61.
Nosofsky, R. M. (1988). Exemplar-based accounts of relations between classification, recognition, and typicality. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 700–708.
Op de Beeck, H., Béatse, E., Wagemans, J., Sunaert, S., & Van Hecke, P. (2000). The representation of shape in the context of visual object categorization tasks. Neuroimage, 12, 28–40.
Op de Beeck, H. P., Torfs, K., & Wagemans, J. (2007). Shape similarity is an organizing principle in human object-selective cortex.
Op de Beeck, H., & Wagemans, J. (2001). Visual object categorization at distinct levels of abstraction: A new stimulus set. Perception, 30, 1337–1361.
Op de Beeck, H., Wagemans, J., & Vogels, R. (2001). Inferotemporal neurons represent low-dimensional configurations of parameterized shapes. Nature Neuroscience, 4, 1244–1252.
O'Toole, A. J., Jiang, F., Abdi, H., & Haxby, J. V. (2005). Partially distributed representations of objects and faces in ventral temporal cortex. Journal of Cognitive Neuroscience, 17, 580–590.
Palmeri, T. J., & Gauthier, I. (2004). Visual object understanding. Nature Reviews Neuroscience, 5, 291–303.
Panis, S., Vangeneugden, J., & Wagemans, J. (in press). Perception.
Pasupathy, A., & Connor, C. E. (2001). Shape representation in area V4: Position-specific tuning for boundary conformation. Journal of Neurophysiology, 86, 2505–2519.
Pasupathy, A., & Connor, C. E. (2002). Population coding of shape in area V4. Nature Neuroscience, 5, 1332–1338.
Poggio, T., & Bizzi, E. (2004). Generalization in vision and motor control. Nature, 431, 768–774.
Riddoch, M. J., & Humphreys, G. W. (2004). Object identification in simultanagnosia: When wholes are not the sum of their parts. Cognitive Neuropsychology, 21, 423–441.
Riesenhuber, M., & Poggio, T. (1999). Hierarchical models of object recognition in cortex. Nature Neuroscience, 2, 1019–1025.
Rolls, E. T., & Deco, G. (2002). Computational neuroscience of vision. New York: Oxford University Press.
Sáry, G., Vogels, R., & Orban, G. A. (1993). Cue-invariant shape selectivity of macaque inferotemporal neurons. Science, 260, 995–997.
Sawamura, H., Georgieva, S., Vogels, R., Vanduffel, W., & Orban, G. A. (2005). Using functional magnetic resonance imaging to assess adaptation and size invariance of shape processing by humans and monkeys. Journal of Neuroscience, 25, 4294–4306.
Sawamura, H., Orban, G. A., & Vogels, R. (2006). Selectivity of neuronal adaptation does not match response selectivity: A single-cell study of the fMRI adaptation paradigm. Neuron, 49, 307–318.
Sayres, R., & Grill-Spector, K. (2006). Object-selective cortex exhibits performance-independent repetition suppression. Journal of Neurophysiology, 95, 995–1007.
Seger, C. A., Poldrack, R. A., Prabhakaran, V., Zhao, M., Glover, G. H., & Gabrieli, J. D. (2000). Hemispheric asymmetries and individual differences in visual concept learning as measured by functional MRI. Neuropsychologia, 38, 1316–1324.
Simons, J. S., Koutstaal, W., Prince, S., Wagner, A. D., & Schacter, D. L. (2003). Neural mechanisms of visual object priming: Evidence for perceptual and semantic distinctions in fusiform cortex. Neuroimage, 19, 613–626.
Spratling, M. W., & Johnson, M. H. (2006). A feedback model of perceptual learning and categorization. Visual Cognition, 13, 129–165.
Tanaka, K. (1993). Neuronal mechanisms of object recognition. Science, 262, 685–688.
Tanaka, K. (1996). Inferotemporal cortex and object vision. Annual Review of Neuroscience, 19, 109–139.
Tanaka, K. (2000). Mechanisms of visual object recognition studied in monkeys. Spatial Vision, 13, 147–163.
Tanaka, K. (2003). Columns for complex visual object features in the inferotemporal cortex: Clustering of cells with similar but slightly different stimulus selectivities. Cerebral Cortex, 13, 90–99.
Vanrie, J., Béatse, E., Wagemans, J., Sunaert, S., & Van Hecke, P. (2002). Mental rotation versus invariant features in object perception from different viewpoints: An fMRI study. Neuropsychologia, 40, 917–930.
Vogels, R. (1999a). Effect of image scrambling on inferior temporal cortical responses. Neuroreport, 10, 1811–1816.
Vogels, R. (1999b). Categorization of complex visual images by rhesus monkeys. Part 2: Single-cell study. European Journal of Neuroscience, 11, 1239–1255.
Vogels, R., & Orban, G. A. (1996). Coding of stimulus invariances by inferior temporal neurons. Progress in Brain Research, 112, 195–211.
Vuilleumier, P., Henson, R. N., Driver, J., & Dolan, R. J. (2002). Multiple levels of visual object constancy revealed by event-related fMRI of repetition priming. Nature Neuroscience, 5, 491–499.
Wagemans, J., De Winter, J., Op de Beeck, H., Ploeger, A., Beckers, T., & Vanroose, P. (2008). Identification of everyday objects on the basis of silhouette and outline versions. Perception, 37, 207–244.
Wager, T. D., & Nichols, T. E. (2003). Optimization of experimental design in fMRI: A general framework using a genetic algorithm. Neuroimage, 18, 293–309.
Wang, Y., Fujita, I., & Murayama, Y. (2000). Neuronal mechanisms of selectivity for object features revealed by blocking inhibition in inferotemporal cortex. Nature Neuroscience, 3, 807–813.
Wang, Y., Fujita, I., & Murayama, Y. (2003). Coding of visual patterns and textures in monkey inferior temporal cortex. Neuroreport, 14, 453–457.
Wilson, H. R., Loffler, G., & Wilkinson, F. (2002). Synthetic faces, face cubes, and the geometry of face space. Vision Research, 42, 2909–2923.
Worsley, K. J., & Friston, K. J. (1995). Analysis of fMRI time-series revisited—again. Neuroimage, 2, 173–181.
Yovel, G., & Kanwisher, N. (2005). The neural basis of the behavioral face-inversion effect. Current Biology, 15, 2256–2262.
Figure 1. Illustration of stimulus construction. (A) Two-dimensional MDS solution applied to the similarity ratings of the outlines of the 11 car stimuli. The four encircled cars were selected as the most extreme exemplars and used to create morphing sequences from the (1) upper-left, (2) lower-left, (3) lower-right, and (4) upper-right corners of the shape space. (B) Six morphing sequences created between all pairs of the four selected within-category exemplars. From top to bottom are the source shape, the 33% morph (i.e., two thirds source and one third target), the 66% morph (i.e., one third source and two thirds target), and the target shape. From left to right are the six morphing sequences, using the following shapes as source and target, respectively: 1–2, 1–3, 1–4, 2–3, 2–4, and 3–4.
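The morphs in Figure 1 can be understood as weighted averages of corresponding outline points. Below is a minimal Python sketch of this idea, assuming each outline is stored as an ordered array of corresponding (x, y) contour coordinates; the helper morph_outline is hypothetical, not the software actually used to construct the stimuli.

    import numpy as np

    def morph_outline(source, target, alpha):
        # Linear interpolation between two outlines, each an (n_points, 2)
        # array of corresponding contour coordinates; alpha is the
        # transformational distance from the source (0 = source, 1 = target).
        source = np.asarray(source, dtype=float)
        target = np.asarray(target, dtype=float)
        return (1.0 - alpha) * source + alpha * target

    # One morph line: source, 33% morph, 66% morph, target (toy outlines).
    source = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 0.0]])
    target = np.array([[0.0, 0.2], [1.0, 1.0], [2.0, 0.3]])
    morph_line = [morph_outline(source, target, a) for a in (0.0, 0.33, 0.66, 1.0)]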
Figure 2. Examples of two morph lines from each category, which were used in Experiment 2.
Figure 3. Illustration of the construction of sets of stimulus pairs in Experiment 1 for the category fish. (a) The 16 exemplars (left) and the positions of the six morph lines (right). The black dots represent the four extreme exemplars; the white dots represent the twelve morphed exemplars. (b) In the first set, exemplars were paired with themselves. (c) In the second set, extreme stimuli were paired with 33% versions. (d, e) In the third and fourth sets, stimulus pairs were generated by pairing extreme stimuli with the 66% variants and with the other extreme stimulus on each morph line, respectively. For clarity, only one fourth of the arrows are drawn in d and e. See text for further details.
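The four pair sets in Figure 3 can be enumerated mechanically. The following Python sketch illustrates the pairing logic of panels b–e under an assumed labeling in which the four extremes are numbered 1–4 and each of the six morph lines carries a 33% and a 66% morph; for brevity, each non-identical pair is listed in one direction only.

    from itertools import combinations

    extremes = [1, 2, 3, 4]
    lines = list(combinations(extremes, 2))  # the six morph lines: (1,2), ..., (3,4)

    # All 16 exemplars: the 4 extremes plus a 33% and a 66% morph on each line.
    stimuli = [("extreme", e) for e in extremes] + \
              [("morph", s, t, pct) for s, t in lines for pct in (33, 66)]

    set1 = [(stim, stim) for stim in stimuli]                         # panel b: identical pairs
    set2 = [(("extreme", s), ("morph", s, t, 33)) for s, t in lines]  # panel c: 33% change
    set3 = [(("extreme", s), ("morph", s, t, 66)) for s, t in lines]  # panel d: 66% change
    set4 = [(("extreme", s), ("extreme", t)) for s, t in lines]       # panel e: 100% change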
Figure 4. Illustration of the construction of sets of stimulus pairs in Experiment 2 for one morph line of the category motorcycle. The four stimuli are labeled 0, 33, 66, and 100 to indicate their transformational distance from the first stimulus on the left. See text for further details.
Figure 5. The inflated left hemisphere of one participant with the “object-scrambled” contrast overlaid, showing both subregions of the lateral occipital complex (LOC), i.e., the lateral occipital (LO) and posterior fusiform (PFS) or ventral occipitotemporal cortex (VOT).
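Object-selective regions such as the LOC are conventionally localized with a general linear model contrast of intact minus scrambled objects. The Python sketch below shows the generic single-voxel computation of a contrast t-value; it is an illustration of the standard approach, not the specific localizer implementation used in this study.

    import numpy as np

    def contrast_t(y, X, c):
        # Ordinary least-squares fit of one voxel's time course y on the
        # design matrix X, followed by a t-statistic for contrast vector c.
        beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        dof = len(y) - np.linalg.matrix_rank(X)
        sigma2 = resid @ resid / dof
        var_c = sigma2 * (c @ np.linalg.pinv(X.T @ X) @ c)
        return (c @ beta) / np.sqrt(var_c)

    # Design columns: [objects, scrambled, intercept]; contrast: objects - scrambled.
    rng = np.random.default_rng(2)
    X = np.column_stack([rng.random(100), rng.random(100), np.ones(100)])
    y = X @ np.array([1.0, 0.2, 50.0]) + rng.normal(0.0, 1.0, 100)
    print(contrast_t(y, X, np.array([1.0, -1.0, 0.0])))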
Figure 6. Mean reaction time as a function of condition in Experiment 1. Error bars indicate the standard error of the mean.
Figure 7. Percent signal change (PSC) as a function of condition in Experiment 1.
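PSC is not defined in the captions themselves; a common event-related convention averages the BOLD signal over a post-onset window and expresses the result relative to a baseline estimate. A minimal Python sketch of that generic computation follows (an assumption, not the exact procedure used here).

    import numpy as np

    def percent_signal_change(bold, onsets, window, baseline):
        # bold: 1-D array with one value per volume; onsets: trial onsets
        # in volumes; window: number of post-onset volumes to average;
        # baseline: mean signal during fixation or rest periods.
        responses = [bold[t:t + window].mean() for t in onsets]
        return 100.0 * (np.mean(responses) - baseline) / baseline

    # Toy usage with simulated data.
    rng = np.random.default_rng(1)
    bold = 100.0 + rng.normal(0.0, 0.5, size=200)
    print(percent_signal_change(bold, onsets=[20, 60, 100], window=4, baseline=100.0))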
Figure 8. PSC as a function of condition and type in Experiment 1.
Figure 9. PSC as a function of condition in Experiment 2.
Figure 10. PSC as a function of condition and type in Experiment 2.
Table 1. F- and p-values for the effect of condition on RTs within each category.
Category      F(4,3520)      p
Car            5.52          .0002
Guitar         9.12         <.0001
Beetle        10.69         <.0001
Motorcycle     6.00         <.0001
Vase          10.34         <.0001
Fish          15.29         <.0001
Butterfly     10.98         <.0001
Bird           3.30          .0104
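The trial-level degrees of freedom, F(4, 3520), together with the citation of Littell et al. (1996), point to a mixed-model analysis of the RTs with condition as a fixed effect. A rough Python analogue using statsmodels is sketched below; the column names rt, condition, and subject are hypothetical, and the original analyses were run in SAS, so this illustrates the approach rather than reproducing it.

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per trial, with hypothetical columns 'rt', 'condition', 'subject'.
    trials = pd.read_csv("rt_trials.csv")

    # Linear mixed model: fixed effect of condition, random intercept per subject.
    model = smf.mixedlm("rt ~ C(condition)", trials, groups=trials["subject"])
    result = model.fit()
    print(result.summary())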
Table 2. Results of six paired t-tests comparing “33–0” with “66–33.”
                  Experiment 1 (N = 6)     Experiment 2 (N = 16)    Experiments 1 and 2 (N = 22)
                  Average      p           Average      p           Average      p
Animals   33–0    0.685        0.1044      0.325        0.0397      0.423        0.0066
          66–33   0.241                    0.104                    0.141
Tools     33–0    0.471        0.3695      0.064        0.1923      0.175        0.6903
          66–33   0.215                    0.224                    0.221
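Each test compares, within participants, the release from adaptation over the first morph step (“33–0”) against the release over the second step (“66–33”). A minimal Python sketch with SciPy, using placeholder values in place of the participants' actual PSC differences:

    import numpy as np
    from scipy.stats import ttest_rel

    rng = np.random.default_rng(0)
    # Placeholder per-participant releases from adaptation (PSC differences);
    # in the actual analysis these come from each subject's ROI data.
    release_33_0 = rng.normal(0.42, 0.3, size=22)   # PSC("33") - PSC("0")
    release_66_33 = rng.normal(0.14, 0.3, size=22)  # PSC("66") - PSC("33")

    t, p = ttest_rel(release_33_0, release_66_33)
    print(f"t = {t:.2f}, p = {p:.4f}")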