Article | June 2013
Memory color of natural familiar objects: Effects of surface texture and 3-D shape
Author Affiliations
  • Milena Vurro
    Department of Neurosurgery, Harvard Medical School, Massachusetts General Hospital, Boston, MA, USA
    [email protected]
  • Yazhu Ling
    Apple Inc., Cupertino, CA, USA
    [email protected]
  • Anya C. Hurlbert
    Institute of Neuroscience, Faculty of Medical Sciences, Newcastle University, UK
    [email protected]
Journal of Vision June 2013, Vol. 13, 20. doi: https://doi.org/10.1167/13.7.20
Abstract
Natural objects typically possess characteristic contours, chromatic surface textures, and three-dimensional shapes. These diagnostic features aid object recognition, as does memory color, the color most associated in memory with a particular object. Here we aim to determine whether polychromatic surface texture, 3-D shape, and contour diagnosticity improve memory color for familiar objects, separately and in combination. We use solid three-dimensional familiar objects rendered with their natural texture, which participants adjust in real time to match their memory color for the object. We analyze the mean, accuracy, and precision of the memory color settings relative to the natural color of the objects under the same conditions. We find that in all conditions, memory colors deviate slightly but significantly in the same direction from the natural color. Surface polychromaticity, shape diagnosticity, and three-dimensionality each improve memory color accuracy, relative to uniformly colored, generic, or two-dimensional shapes, respectively. Shape diagnosticity also improves the precision of memory color, and there is a trend for polychromaticity to do so as well. Unlike other studies, we find that the object contour alone also improves memory color. Thus, enhancing the naturalness of the stimulus, in terms of either surface or shape properties, enhances the accuracy and precision of memory color. The results support the hypothesis that memory color representations are polychromatic and are synergistically linked with diagnostic shape representations.

Introduction
When color is diagnostic for an object or scene—for example, a yellow banana or a green forest scene—its presence improves visual recognition. Specifically, diagnostic color enhances performance across a broad range of tasks, including subordinate-level object recognition (Tanaka & Presnell, 1999; Tanaka, Weiskopf, & Williams, 2001), object/scene-naming speed and accuracy (Naor-Raz, Tarr, & Kersten, 2003; Oliva & Schyns, 2000; Ostergaard & Davidoff, 1985, 1988; Rossion & Pourtois, 2004; Therriault, Yaxley, & Zwaan, 2009), visual search (Ehinger & Brockmole, 2008), and face recognition (Yip & Sinha, 2002). This enhancement relies on the prototypical color of the object or scene being stored in a mental representation, so that it can be called upon for a match to the incoming stimulus. Hering (1874) called this mental representation the memory color of a familiar object and postulated that it was automatically established through frequent experience of actual instances of the object: “the color in which we have oftenest seen an external thing … a fixed characteristic of the memory image …” (Hering, 1874). 
Hering's hypothesis of a single fixed memory color for familiar objects has been validated with a variety of color-matching experiments (Bartleson, 1960; Bruner, Postman, & Rodrigues, 1951; Delk & Fillenbaum, 1965; Duncker, 1939; Humphrey, Goodale, Jakobson, & Servos, 1994; Newhall, Burnham, & Clark, 1957; Siple & Springer, 1983; Vurro, Ling, & Hurlbert, 2007; Yendrikhovskij, Blommaert, & de Ridder, 1999); generally, the results indicate that memory color is not perfectly accurate, tending both to be exaggerated towards the dominant hue and to be higher in saturation (Bartleson, 1960, 1961; Perez-Carpinell, Baldovi, de Fez, & Castro, 1998; Vurro et al., 2007; Yendrikhovskij et al., 1999). 
Hering (1874) also proposed that memory color affects immediate perception; for example, he suggested that in a “fleeting glance,” the memory color of an object may replace the actual stimulus color of a familiar object, particularly if it were relatively unattended or under an unusual illumination. The extent to which this postulate holds is still under debate, despite recent supporting evidence from low-level measurements of neutral chromaticity points (Hansen, Olkkonen, Walter, & Gegenfurtner, 2006) and high-level classification of ambiguous hues (Mitterer & de Ruiter, 2008). 
One reason for the uncertainty over the extent to which memory color affects immediate perception is that it depends on the attributes of the stimulus that elicits it. Natural objects are characterized by multiple visual attributes—color, texture, shape, and motion and, at another level, surface attributes such as translucency and gloss—and it is plausible that memory color is activated more effectively when other visual attributes associated with the object are also present. 
Taking surface texture into consideration first, most natural familiar objects—such as a mottled ripe banana or speckled greenish pear—possess chromatically variegated surfaces due to intrinsic factors such as pigment inhomogeneities or surface roughness, as opposed to extrinsic factors such as illumination or scene configuration. This surface polychromaticity, when plotted in cone-contrast space, gives rise to distinctive object-specific chromatic signatures, e.g., for bananas, lemons, zucchini (Hurlbert, Vurro, & Ling, 2008; Ling, Vurro, & Hurlbert, 2008). Moreover, these signatures transform predictably under changes in illumination, a characteristic that may aid color constancy of natural objects (Hurlbert et al., 2008; Ling et al., 2008). Based on these observations, we suggest that memory effects might be modulated by polychromaticity. Yet memory color is typically studied using uniformly colored stimuli (Bartleson, 1960; Bruner et al., 1951; Delk & Fillenbaum, 1965; Duncker, 1939; Humphrey et al., 1994; Newhall et al., 1957; Siple & Springer, 1983) with a few recent exceptions that employ photographs of natural familiar objects (Hansen et al., 2006; Olkkonen, Hansen, & Gegenfurtner, 2008; Yendrikhovskij et al., 1999). Olkkonen et al. (2008), for example, found that achromatic settings for 2-D images of familiar objects deviated more from neutral in the presence of luminance shading or combined luminance-shading and surface markings, relative to uniform surfaces. None of these latter works examine polychromaticity per se, and instead confound intrinsic and extrinsic factors in the stimulus representation, particularly with respect to luminance variations, making it difficult to establish the contribution of intrinsic polychromaticity to the findings. A primary aim of this study is to assess the contribution of intrinsic surface polychromaticity to the measurement of memory color of natural familiar objects. 
The intrinsic surface polychromaticity of natural objects may also contribute to an inherent variability in the memory color representation itself, which in turn may contribute to the observed variability in the immediate effectiveness of memory color. In previous work (Hurlbert et al., 2008; Ling et al., 2008; Vurro, Ling, & Hurlbert, 2009), we have proposed that, within individuals, the memory color associated with a familiar object is not a single color but a range of colors. We suggest that this range arises more from the intrinsic variations in surface color within a single object than from the variation in surface color between objects of a given type. In this study, we examine not only the accuracy of memory color but also its range. 
Shape is another cue to object identity that has been frequently investigated, but its contribution to memory color is still debated. Duncker (1939) first emphasized the importance of shape in eliciting memory color by demonstrating that a desaturated leaf shape (2-D contour) appeared greener than a donkey shape, even when both shapes were cut from the same (green) paper and lit by the same (reddish) illumination. Other studies have concluded the opposite, that memory color effects are strongest when stimulus information is limited (Bolles, Hulicka, & Hanly, 1959; Bruner et al., 1951) or, more explicitly, that there is no effect of stimulus shape on memory color (Siple & Springer, 1983). Olkkonen et al. (2008) suggest that the influence of 2-D contour cues alone may be minimal: When observers were asked to adjust the color of a 2-D uniform stimulus to appear neutral, no significant difference was found between stimulus shapes (natural object contours vs. disks). This result is in line with recent findings (Ling & Hurlbert, 2005; Ling, Vurro, & Hurlbert, in press) demonstrating that mean memory colors—measured with a yes-no paradigm—do not differ for uniformly colored 2-D disks and 2-D natural object contours. These same studies, though, demonstrated that the intra-individual range of memory colors is strongly modulated by stimulus shape. It may be that the effect sizes with respect to mean memory colors are too small to be measured with coarse-resolution techniques. Therefore, a second goal of this work is to examine the effect of the 2-D contour of natural objects using a different, finer-resolution method to measure memory color. 
Lastly, it is important to distinguish between 2-D and 3-D cues to object shape in considering their effects on memory color. The former include object boundary contours and luminance gradients presented in a two-dimensional image; the latter include binocular disparities and motion parallax, which arise naturally from real three-dimensional stimuli. Psychophysical studies have demonstrated that 3-D cues alter color appearance in various tasks (Bloj, Kersten, & Hurlbert, 1999; Ling & Hurlbert, 2004; Shevell & Miller, 1996; Yamauchi & Uchikawa, 2005), yet the contribution of 3-D shape to memory color, specifically in the context of natural objects, is still largely unknown. The only suggestive evidence comes from Olkkonen et al. (2008), who found that 3-D luminance shading cues affect achromatic settings for 2-D photographs of natural familiar objects. The use of real, solid objects is rare in memory color research. To our knowledge, this is the first study aiming to explore the separate contributions of 2-D and 3-D shape cues to memory color measurements using 3-D solid natural objects. 
To simplify the study, we limited the measurement of memory color to one dimension, the hue. We investigated how different visual attributes of familiar natural objects—intrinsic chromatic texture, 2-D contour, and 3-D surface shape—influence memory for their mean hue, in terms both of accuracy (absolute difference from the natural object hue) and precision (variation around the mean memory hue). We used a software-controlled 3-D environment in which we manipulated each identifying attribute independently of the others, and observers were able, in real time and with free viewing, to adjust the overall chromatic texture of the object—a familiar fruit or vegetable—to match their memory color for it. We aim to answer the following questions: (a) How and to what extent do surface polychromaticity, 2-D contour, and 3-D surface shape affect memory color (as classically defined)? (b) How and to what extent do these features affect memory color accuracy (as defined above)? (c) How and to what extent do these features affect memory color precision (as defined above)? (d) Do these attributes of object identity interact in influencing the observer's performance? We hypothesize that polychromaticity will improve memory color, in terms of both accuracy and precision, relative to uniformly colored stimuli, as will 3-D shape cues and shape diagnosticity relative to 2-D shape cues and shape genericity. We further hypothesize that the improvement due to polychromaticity will be more pronounced in the presence of diagnostic shapes, both 2-D and 3-D. 
Methods
Observers performed an adjustment task, manipulating the surface colors of an object (the test object) via a joystick until its surface appearance matched the observer's own memory color for a previously named fruit or vegetable (the reference object). The surface of the test object was either uniformly colored or naturally textured; the test objects themselves were either 2-D projected images or 3-D solid objects. The test object's shape was either the natural fruit or vegetable shape or a generic shape. All software for the experiment was written in Matlab 7.5 (The MathWorks Inc.). 
Observers
Twenty-eight observers (20 female and eight male, age 19–28 years, all students at Newcastle University) took part in all conditions of the experiment. All tested normal on the Farnsworth-Munsell 100 hue test (mean total error score 23.9, Kinnear & Sahraie, 2002; all error patterns consistent with normal trichromacy) and were naïve to the purpose of the experiment. The experimental procedure was approved by Newcastle University's Psychology Ethics Committee, and informed consent was obtained from each participant before the experiment. 
Apparatus
In order to analyze the interaction of shape and surface properties of real objects, we developed an experimental environment that allows the observer or experimenter to change the apparent surface color of individual 3-D objects in real time. The main idea is to use a hidden data projector to display an image on a real solid object that has been painted matte white, so that its surface color depends directly on the color projected. Figure 1 illustrates the experimental chamber. Observers view scenarios of 3-D objects displayed on the vertical backboard (main display) of the light-tight chamber. The illumination within the chamber is produced by two light sources: the light on the main display is generated by the data projector, whereas the rest of the chamber is illuminated by the output of two sets of three fluorescent side lamps. The chamber design, as described in detail in the Supplemental materials: Experimental apparatus, prevents interactions between the light from the data projector and that from the side lamps, and each light source is controlled independently. The chamber illumination was set to be metameric to the standard daylight illuminant at a correlated color temperature of 6,500 K (CIE standard D65). The CIE xyY values of the illumination on the chamber surfaces are given in Table 1. The projected image consists of a central object (the projected stencil) on a uniform neutral background, geometrically calibrated so that the outline of the projected stencil aligns perfectly with an individual solid object (3-D object) in a given scenario. The apparent surface color of the 3-D object is controlled on a trial-by-trial basis by altering the color of the projected stencil. For further details of the experimental apparatus, see Supplemental material: Experimental apparatus. 
The subject sat at a distance of 1200 mm from the main display (dimensions 610 mm × 810 mm) and viewed the object through a rectangular aperture in the chamber wall (400 mm × 250 mm). The data projector and side lamps were hidden from the observer's view. 
Figure 1
 
Schematic figure of the experimental apparatus (right side view); see Supplemental materials: Experimental apparatus for further details.
Table 1
 
CIE xyY chromaticity coordinates of the chamber walls, under D65-metameric illumination from the data projector and side lamps, as described in the text.
Surface (illuminant D65)    Y (cd/m²)    x        y
Main display                7.061        0.313    0.327
Walls                       3.991        0.314    0.332
Color space
All test colors were defined using the cone contrast space described by Eskew, McLellan, and Giulianini (1999), labelled EMG space in this paper. This space was chosen because it is directly based on physiological properties of the visual system, specifically on the opponent modulation of the cone outputs (Smith & Pokorny, 1975) in relation to chromatic detection and discrimination. The origin of this color space is given by the cone excitations for a white sample in the scene (L0, M0, and S0). Cone-signal variations with respect to this origin, or adaptation point, are defined for each point in the scene as the ratios ΔL = (L − L0)/L0, ΔM = (M − M0)/M0, and ΔS = (S − S0)/S0, where L, M, and S are the cone excitations of the specified point. The three axes of the color space, where LUM is the luminance axis, RG the “red-green” axis, and BY the “blue-yellow” axis, are then defined as linear combinations of ΔL, ΔM, and ΔS (modified from Eskew et al., 1999). In this space, a point may be defined using the cylindrical coordinates radius, azimuth, and height (r, a, h), where the radius corresponds to saturation changes, the azimuth, i.e., the clockwise angle formed with the RG axis, to hue, and the height to luminance. 
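As an illustration of the coordinate definitions above, the conversion from cone excitations to cone-contrast coordinates and then to cylindrical (r, a, h) coordinates can be sketched as follows. The axis weights here (a simple sum and difference of the cone contrasts) and the sign convention for the clockwise azimuth are assumptions for illustration; the paper's exact linear combinations follow Eskew et al. (1999).

```python
import math

def emg_coordinates(lms, lms_white):
    """Cone-contrast coordinates of an LMS triple relative to the white
    point (adaptation point). The opponent axis weights used here
    (LUM = dL + dM, RG = dL - dM, BY = dS - (dL + dM)/2) are assumed
    for illustration, not taken from the paper."""
    dl = (lms[0] - lms_white[0]) / lms_white[0]
    dm = (lms[1] - lms_white[1]) / lms_white[1]
    ds = (lms[2] - lms_white[2]) / lms_white[2]
    lum = dl + dm
    rg = dl - dm
    by = ds - (dl + dm) / 2.0
    return lum, rg, by

def cylindrical(lum, rg, by):
    """(radius, azimuth, height): saturation, hue angle in degrees
    (here taken clockwise from the RG axis; sign convention assumed),
    and luminance."""
    r = math.hypot(rg, by)
    a = math.degrees(math.atan2(-by, rg)) % 360.0
    return r, a, lum
```

A sample at the adaptation point maps to the origin (zero radius), so the azimuth (hue) is defined only for non-neutral samples.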
Stimuli
Three natural familiar fruit and vegetable objects (reference objects—central column in Figure 2) were employed to generate the stimuli for this experiment: a Gala apple, a ripe banana, and a carrot. The general experimental scene consisted of a single test object placed against a uniform neutral background illuminated by a CIE D65 metamer. The possible shapes of the test object were: (a) a 2-D generic shape (2DG; a rhomboid; size 3.00° × 2.15°), (b) a 3-D generic shape (3DG; size 3.00° × 2.15°), (c) the 2-D natural shape of one of the reference objects (2DN; for sizes see Table 2), or (d) the 3-D natural shape of one of the reference objects (3DN; for sizes see Table 2). The 3-D test objects were artificial plaster replicas of the three reference objects (3DN) and one hand-made clay irregular rhomboid-based pyramid (3DG). These solid test objects were spray-painted matte white and attached to the main display with small magnetic buttons glued to their undersides (Figure 3B). 
Figure 2
 
Photographs of the experimental objects and their chromaticity distributions. Top to bottom: Gala apple, banana, and carrot. Left column: chromaticity distributions of the experimental objects in EMG cone contrast space under D65 illumination. Each chromaticity is represented only once even if occurring more frequently in the distribution. Note that the color of each point only approximately indicates its actual color due to uncalibrated reproduction in this figure. Black contour: the area containing 99% of the chromaticities of the object's chromatic distribution (OV). Black line: mean hue vector of the distribution (as described in text). Black star: mean chromaticity. Central column: digital photographs of natural objects used for chromatic texture extraction (see Texture extraction). Right column: digital photographs of gray-painted natural objects.
Figure 3
 
(A) Example of the center of a typical scene seen from the observer's point of view. (B) Solid test objects.
Table 2
 
Approximate surface area of the objects in square centimeters, and their size in degrees of visual angle, as viewed in the experimental chamber from the subject's point of view.
Size                  Apple          Banana         Carrot
Degrees               3.81 × 3.15    7.36 × 3.10    8.72 × 1.67
Square centimeters    52             62             60
From digital photographs of each reference object, three distinct sets of chromatic surfaces were generated—a chromatic texture set and two uniform color sets. The digital camera calibration procedure and image-processing software used for these steps are detailed in Supplemental materials: Image generation. To create the texture set, a texture image was extracted from the reference object's photograph and manipulated as described in Texture extraction and Texture manipulation. Subsequently, each output of the manipulation was morphed so that the final image consisted of a chromatic texture or uniform color region in the shape of the corresponding test object (e.g., banana on banana shape) or of the generic test object (e.g., banana on rhomboid shape). Therefore, when projected on the main display, the contour of the chromatic region overlapped perfectly with the visible contour of the solid test object from the observer's point of view (see details in Supplementary materials: Geometric calibration and morphing technique). Finally, the image was converted to the projector's RGB color space using the calibration model described in Supplemental materials: Spatial and spectral calibration and the mapping technique described in Supplemental materials: Gamut mapping technique. 
The observer's perception was therefore of a 2-D or 3-D shape with a changeable surface color. In the 2-D stimulus, the texture consists only of the intrinsic luminance and chromaticity variation of the original object surface, while the 3-D stimulus also contains luminance variations due to 3-D shading. A photograph of the observer's view of the banana object with banana texture is shown in Figure 3A. See Supplemental material: Geometric calibration and morphing technique for further detail on the geometric calibration and morphing technique. 
Texture extraction
Our primary aim in this step was to extract the veridical surface texture of the object due solely to intrinsic variations in surface reflectance, excluding variations in chromaticity or luminance caused by geometric factors such as illumination geometry, object 3-D shape, or scene configuration. We did so by subtracting the luminance shading (including specular highlights) from full-color photographs of natural objects (Figure 2, central column), by comparison with photographs of the same objects painted matte gray (Figure 2, right-hand column), taken under otherwise identical conditions with a color-calibrated digital camera. See Supplementary materials: Texture extraction for details and Figure 4 for an example of the results of the procedure on the Gala apple. Figure 2 (left-hand column) illustrates the chromaticity distributions in EMG cone contrast space for the three objects after highlight and shading removal. 
Figure 4
 
(A) Image of the Gala apple used in the experiments; (B) its extracted texture 2-D image without highlights and shading. Note that the images only approximately represent the actual colors due to their uncalibrated reproduction for this print.
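A minimal sketch of the shading-removal idea described above, assuming the gray-painted object's photograph captures only the extrinsic shading and highlights: dividing the color photograph by the normalized intensity of the gray image leaves mostly the intrinsic reflectance texture. The actual calibrated procedure is given in the Supplementary materials; the function name and the normalization scheme here are illustrative.

```python
import numpy as np

def remove_shading(color_img, gray_img, eps=1e-6):
    """Approximate the intrinsic reflectance texture of an object.

    Illustrative sketch only: `color_img` and `gray_img` are H x W x 3
    arrays of the colored and gray-painted object photographed under
    identical conditions. The gray image's per-pixel intensity is taken
    as the shading (plus highlight) profile; dividing it out leaves the
    intrinsic surface variation."""
    shading = gray_img.mean(axis=2, keepdims=True)   # per-pixel intensity
    shading = shading / (shading.max() + eps)        # normalize to [0, 1]
    return color_img / (shading + eps)
```

On a synthetic object whose color image is a flat reflectance modulated by a smooth shading field, this recovers an (approximately) uniform reflectance map.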
As an index of the overall chromatic variegation of the object's surface (OV), we calculated the area of the chromaticity distribution in EMG cone contrast space (in practice, this is the area containing 99% of the chromaticities, shown by the black contours in the left-hand column of Figure 2). Table 3 lists the OV indices for EMG space, the corresponding indices calculated in CIE LUV space, and the mean hue of the chromaticity distribution in EMG space (as defined above). 
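The OV index can be approximated as follows. Since the text does not specify the area-estimation method, this sketch uses a simple occupancy-grid estimate over the RG-BY plane after discarding the most extreme 1% of chromaticities (by distance from the mean); the grid resolution and trimming rule are assumptions.

```python
import numpy as np

def chromatic_variegation(rg, by, coverage=0.99, bins=64):
    """Grid-based estimate of the area covered by a chromaticity
    distribution in the RG-BY plane (the OV index of the text).

    Illustrative sketch: keep the `coverage` fraction of points closest
    to the mean chromaticity, then sum the area of the occupied cells
    of a 2-D histogram over the kept points."""
    pts = np.column_stack([np.asarray(rg, float), np.asarray(by, float)])
    d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    keep = pts[d <= np.quantile(d, coverage)]
    h, xedges, yedges = np.histogram2d(keep[:, 0], keep[:, 1], bins=bins)
    cell = (xedges[1] - xedges[0]) * (yedges[1] - yedges[0])
    return np.count_nonzero(h) * cell
```

For a dense distribution filling the unit square, the estimate approaches an area of 1, as expected.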
Table 3
 
Mean hue of the object in EMG color space and object chromatic variegation (OV) in EMG color space and CIE LUV color space.
Object    Mean hue (°)    OV (EMG)       OV (LUV)
Apple     70.12           2.32 × 10⁻³    3304.4
Banana    84.13           1.28 × 10⁻³    2823.3
Carrot    67.72           0.96 × 10⁻³    2256.8
Texture manipulation
As mentioned above, observers adjusted the surface color of a given object by pressing joystick buttons, which moved the displayed surface color distribution through a continuous set of test stimuli. For each reference object and shape combination, three sets of surface color distributions were generated based on the following initial chromatic surface types: (a) TEX, a fully textured surface, preserving the spatial structure of the original chromatic texture; (b) MM, a uniformly colored surface, with each pixel assigned the mean chromaticity of the reference object's distribution; and (c) MS, a uniformly colored surface, with each pixel assigned the most saturated value (largest radius) of the reference object's distribution. A set of TEX chromatic surfaces was created by rigidly rotating the entire initial distribution of a reference object (shown in Figure 2) around the adaptation point, i.e., the origin of the EMG cone contrast space. This method maintained the relative distance between individual chromaticities in the distribution while varying the hue of the distribution; thus the initial chromaticity value at each spatial location was replaced by its rotated value, while its initial luminance value was preserved. Specifically, for each chromaticity of the initial distribution with polar coordinates ci = (ri, Θi), and a given angle of rotation ΦN, the new chromaticity after rotation is given by ci′ = (ri, Θi + ΦN). 
The rotation space was sampled at 1° intervals; hence, 360 images were generated per object/configuration combination (3 chromatic surfaces × 4 shapes = 12 combinations; 12,960 images). As an example, Figure 5A illustrates the chromaticity distribution of the original banana (0° rotation) and the same distribution rotated 180° in EMG cone contrast space. For symmetry, the rotations were grouped into positive (clockwise relative to the original hue angle; 1°–180°) and negative (counter-clockwise; −1° to −179°) rotations. The difference between two consecutive angles of rotation (i.e., 1°) in EMG is on average 1.38 ΔEuv with median 1.48 ΔEuv, well below the just noticeable difference threshold (Stokes, Fairchild, & Berns, 1992). 
Figure 5
 
(A) The banana's natural distribution (yellow-brown) and its 180° rotated distribution (blue-violet) in EMG cone contrast space. Note that the color of each point only approximately indicates its actual color due to uncalibrated reproduction in this figure. Each color is represented only once even if it occurs more frequently in the distribution. Bottom left insert: original banana image. Top right insert: altered banana image. (B) General procedure: illustration of the image sequence in a single trial. Note that the colors here are for illustration purposes only and are not as displayed in the experiment.
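The rigid hue rotation described above amounts to adding a fixed angle to each chromaticity's azimuth while leaving its radius (and luminance) unchanged; a minimal sketch:

```python
import numpy as np

def rotate_distribution(r, theta_deg, phi_deg):
    """Rigidly rotate a chromaticity distribution about the adaptation
    point: each chromaticity (r_i, theta_i) becomes (r_i, theta_i + phi),
    preserving relative distances between chromaticities while changing
    the overall hue. Angles are in degrees; luminance is untouched."""
    r = np.asarray(r, dtype=float)
    theta = (np.asarray(theta_deg, dtype=float) + phi_deg) % 360.0
    return r, theta
```

Sampling phi at 1° steps over the full circle yields the 360 surfaces per object/configuration combination described in the text.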
Next, for each reference object and shape combination and each rotated distribution, we generated two uniformly colored surfaces: one (MM) with each pixel assigned the mean chromaticity of the corresponding rotated distribution, and the other (MS) with each pixel assigned the most saturated chromaticity. Note that the luminance value of each pixel in the uniform stimuli was set to the mean luminance of the object's initial surface color distribution (when viewed in the 3-D condition, all object surfaces had luminance variations due to 3-D shading). All test stimuli were generated offline before the experiment using Matlab 7.5 (The MathWorks Inc.). 
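Deriving the two uniform-surface chromaticities from a (possibly rotated) distribution, as described above, can be sketched as follows (the mean chromaticity for MM, and the largest-radius chromaticity for MS; the function name is illustrative):

```python
import numpy as np

def uniform_surfaces(rg, by):
    """Chromaticities for the two uniform conditions derived from a
    distribution in the RG-BY plane: MM is the mean chromaticity, MS
    the most saturated one (largest radius from the origin, i.e., the
    adaptation point)."""
    pts = np.column_stack([np.asarray(rg, float), np.asarray(by, float)])
    mm = pts.mean(axis=0)                                   # MM surface
    ms = pts[np.argmax(np.linalg.norm(pts, axis=1))]        # MS surface
    return mm, ms
```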
For the generic shapes (2-D or 3-D), the initial chromaticity distribution of the TEX surfaces was a large (rhomboid-shaped) section of the natural surface of the original reference object. Although this distribution was, in theory, not identical to that of the original object, in practice, it differed on average by less than 12% (in number of discriminable chromaticities). 
Experimental procedure
The observer's task on each trial was to adjust the color of the test object until it appeared natural or “typical” for a previously named fruit or vegetable (Gala apple, banana, carrot), choosing among the corresponding continuous set of chromatic surfaces. Each observer completed 12 blocks of trials (four shape configurations—2DG, 3DG, 2DN, and 3DN—for each of the three objects separately) spread over several sessions of approximately 1 hr, each block lasting 15–25 min. In each block, textured (TEX) and uniform chromatic surface conditions were alternated, counterbalancing the mean (MM) and most saturated (MS) conditions, such that in total there were 10 MM trials, 10 MS trials, and 20 TEX trials. Prior to the first session, observers were asked to familiarize themselves outside the laboratory (e.g., in grocery stores) with a list of fruits and vegetables including Gala apples, bananas, and carrots. In addition, before starting the main experiment, observers acquired familiarity with the task during a 3-min practice session. 
At the start of each session, the observer stabilized his/her head on the chin rest, looking inside the experimental chamber. The side lights inside the chamber were switched on and set to the illumination chromaticity values; the experimental room was otherwise in complete darkness. At the beginning of each block, the name of one of the reference objects was presented in large black text on a white background for 5 s, then removed and followed by a 60-s display of the uniform neutral background illuminated by a CIE D65 metamer (the “adaptation phase”). Then the first stimulus was presented; the starting point for the adjustment was chosen at random from the set of images generated for one of the three chromatic surface conditions, counterbalanced over all observers/sessions. The observer changed the color appearance of the object until s/he judged it to be “natural,” using two joystick buttons, one for each rotation direction, counterbalanced between observers. The participant could use either button to complete an entire 359° rotation and smoothly continue in the same direction through 0° and beyond; both rotation directions could be used as often as needed within a trial. Observers were asked to complete the task as quickly as possible, but no time constraint was imposed. After making a decision, the observer pressed a third button, and the selection was recorded, along with the time required for the adjustment and the starting settings. A fixation cross was displayed for 0.5 s and the next trial was initiated. The general protocol is illustrated in Figure 5B. 
Each observer was assigned one of the six possible object sequences (e.g., carrot-apple-banana). For all observers, the first shape presented was always the generic shape for all reference objects; this was followed by the 2-D shape and then the 3-D shape for 50% of the participants, and by the reverse order for the other 50%. This ordering ensured a gradual introduction of identity cues, from the initial linguistic information only (the name) to the final familiar shape information, while also avoiding possible biases due to always presenting the 3-D shape information last. 
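The counterbalancing scheme can be sketched as follows. This is an illustrative reconstruction; the function and the observer-index assignment rule are ours, not the authors' code:

```python
from itertools import permutations

OBJECTS = ("apple", "banana", "carrot")

def assign_orders(observer_index):
    """Illustrative counterbalancing sketch (not the authors' code).

    Each observer gets one of the 6 possible object sequences, and the
    generic shape always comes first; half the observers then see the
    2-D shape before the 3-D shape, the other half the reverse.
    """
    object_sequences = list(permutations(OBJECTS))   # 6 possible orders
    objects = object_sequences[observer_index % 6]
    if observer_index % 2 == 0:
        shapes = ("generic", "2-D", "3-D")
    else:
        shapes = ("generic", "3-D", "2-D")
    return objects, shapes
```

Cycling the index over observers yields all six object sequences and both shape orders, as the counterbalancing described above requires.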
Data analysis
For each natural object, three experimental factors were manipulated: (a) chromatic surface factor—the extent of variegation in its surface chromaticity, i.e., textured or uniform; (b) shape diagnosticity factor—the presence or absence of the object's natural contour shape, i.e., the object's silhouette or a generic rhomboid; and (c) shape dimensionality factor, i.e., 3-D or 2-D. All analyses were computed after discarding outliers for each observer, defined as points falling more than 1.5 times the interquartile range above the third quartile or below the first quartile. For each shape configuration, only 10 of the 20 texture trials were randomly selected, to match the number of trials in the uniform mean and uniform most saturated cases. Data were analyzed in terms of three performance measures: memory hue deviation, memory hue accuracy, and memory hue precision. 
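The trial-level preprocessing described above (Tukey's 1.5 × IQR outlier rule and the random subsampling of texture trials) can be sketched as below; the function names and NumPy implementation are ours, not the authors':

```python
import numpy as np

def drop_outliers(settings):
    """Discard settings beyond 1.5 x IQR above the third quartile or
    below the first quartile (the rule stated in the text)."""
    settings = np.asarray(settings, dtype=float)
    q1, q3 = np.percentile(settings, [25, 75])
    iqr = q3 - q1
    keep = (settings >= q1 - 1.5 * iqr) & (settings <= q3 + 1.5 * iqr)
    return settings[keep]

def subsample_texture_trials(trials, n=10, rng=None):
    """Randomly keep n of the 20 texture trials so every chromatic
    condition contributes the same number of trials."""
    rng = np.random.default_rng(rng)
    return rng.choice(np.asarray(trials), size=n, replace=False)
```

For instance, `drop_outliers([1, 2, 3, 4, 100])` removes the extreme setting of 100 while retaining the other four.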
The memory hue deviation is defined as the angular difference Δαe between the mean (over 10 trials) of each observer's settings and the original hue angle of the object. The mean deviation across all observers (N = 28) averages together both negative and positive deviations (counter-clockwise and clockwise rotations, respectively), and therefore may falsely imply no error if equal numbers of observers choose two angles of equal amplitude but opposite direction. We therefore compute memory hue accuracy (MHA) as a percentage using the average of the absolute values of the observers' actual angular errors:

\[ \mathrm{MHA}_{i,X} = \left(1 - \frac{1}{180K}\sum_{k=1}^{K}\left|\Delta\alpha_{ik}\right|\right) \times 100\% \]

where K is the number of trials per condition (K = 10) and Δαik is the difference between the angle selected in the kth trial by observer i in Condition X and the object's typical hue angle. The average across all observers is defined as the mean memory hue accuracy and ranges from 0% (polar opposite of the natural distribution, i.e., a 180° error) to 100% (same distribution as the natural). 
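The angular error underlying this accuracy measure can be sketched in code. The wrap-around of the signed deviation to [−180°, 180°) is our assumption about the circular arithmetic, and the 180° normalization follows from the stated 0–100% range:

```python
import numpy as np

def signed_hue_deviation(selected_deg, natural_deg):
    """Signed angular difference wrapped to [-180, 180); negative values
    correspond to counter-clockwise deviations from the natural hue."""
    return ((selected_deg - natural_deg + 180.0) % 360.0) - 180.0

def memory_hue_accuracy(deviations_deg):
    """MHA in percent: 100% for zero mean absolute error, 0% for a
    180-degree (polar opposite) error, per the stated 0-100% range."""
    errors = np.abs(np.asarray(deviations_deg, dtype=float))
    return (1.0 - errors.mean() / 180.0) * 100.0
```

As a check against the article's values, a mean absolute error of 10.2° (the textured-surface total in Table 6) maps to an accuracy of about 94%.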
Although two conditions may have the same mean or accuracy, the range of angles selected might be very dissimilar. Because the smaller the range, the higher the observer's precision in selecting points around the mean, we define memory hue precision (MHP) as:

\[ \mathrm{MHP}_{i,X} = \left(1 - \frac{\max_k(\Delta\alpha_{ik}) - \min_k(\Delta\alpha_{ik})}{180}\right) \times 100\% \]

where maxk(Δαik) and mink(Δαik) are the maximum and minimum values of Δαik over all trials for Condition X, excluding outliers. The average across all observers is defined as the mean memory hue precision and ranges from 0% (selections spanning the full 180° range, i.e., ±90° about their midpoint) to 100% (selected distributions all the same). 
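A matching sketch for the precision measure, again assuming the 180° normalization implied by the reported values (the ~22° mean ranges in Table 8 correspond to ~87–88% precision):

```python
import numpy as np

def memory_hue_precision(deviations_deg):
    """MHP in percent: 100% when all selections coincide, 0% when the
    selections span the full 180-degree range (our reconstruction of
    the normalization from the reported range/precision pairs)."""
    d = np.asarray(deviations_deg, dtype=float)
    spread = d.max() - d.min()      # max_k - min_k over trials
    return (1.0 - spread / 180.0) * 100.0
```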
In addition, we compute the within-factor percentage difference between Conditions A and B for memory hue deviations, accuracies, and precisions as:

\[ \Delta\%_{A,B} = \frac{\mu_B - \mu_A}{\mu_B} \times 100\% \]

where μA and μB are the means of the respective performance indicator (e.g., absolute angular error or angular range) over all trials belonging to two generic Conditions A and B of the same factor, over all observers (N = 28). 
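This percentage difference can be sketched as below; we chose the direction of the ratio so that it reproduces the values reported in the Results (e.g., the mean absolute errors of 10.2° for TEX and 13.5° for MM in Table 6 give the cited 24% difference):

```python
def percent_difference(mu_a, mu_b):
    """Within-factor percentage difference between the mean performance
    indicators of Conditions A and B (e.g., mean absolute angular error
    or mean angular range), as a percentage of Condition B's mean."""
    return (mu_b - mu_a) / mu_b * 100.0
```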
Four-way repeated measures analysis of variance (ANOVA) was performed to test the significance of between-factor differences in memory hue deviation, absolute angular error (and accuracy), and mean angular range (and precision), and contrast tests were used to compare within-factor conditions. Three-way repeated measures ANOVA was used when further analysis was necessary. When Mauchly's test indicated a violation of sphericity, the ANOVA was corrected using Greenhouse-Geisser estimates. A significance level of p = 0.05 was imposed. 
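As a simplified, runnable illustration of the sphericity-corrected analysis: the study used four-way repeated measures designs, but the same logic can be shown for the one-way case. The implementation below is our sketch, not the authors' analysis code:

```python
import numpy as np
from scipy import stats

def rm_anova_gg(data):
    """One-way repeated-measures ANOVA with Greenhouse-Geisser correction.

    A simplified sketch (one within-subject factor only) of the analysis
    described in the text. data: (n_subjects, k_conditions) array.
    Returns (F, df1, df2, epsilon, corrected p-value).
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    cond_means = data.mean(axis=0)
    subj_means = data.mean(axis=1)

    # Partition sums of squares: condition effect vs. residual
    ss_cond = n * np.sum((cond_means - grand) ** 2)
    resid = data - cond_means - subj_means[:, None] + grand
    ss_err = np.sum(resid ** 2)

    df1, df2 = k - 1, (k - 1) * (n - 1)
    F = (ss_cond / df1) / (ss_err / df2)

    # Greenhouse-Geisser epsilon from the double-centered covariance matrix
    S = np.cov(data, rowvar=False)
    C = np.eye(k) - 1.0 / k
    Sc = C @ S @ C
    eps = np.trace(Sc) ** 2 / (df1 * np.sum(Sc * Sc))
    eps = min(1.0, max(eps, 1.0 / df1))      # bound: 1/(k-1) <= eps <= 1

    p = stats.f.sf(F, eps * df1, eps * df2)  # p from epsilon-corrected dfs
    return F, df1, df2, eps, p
```

When sphericity is violated, epsilon shrinks both degrees of freedom (e.g., an uncorrected 2, 54 becomes the fractional dfs reported in Tables 5, 7, and 9), making the test more conservative.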
Results
Memory hue deviation
Observers were imperfect—compared to the average natural object colors—in their memory colors for familiar objects, as indicated by the memory hue deviation values shown in Table 4 and Figure 6. The mean memory hue deviations, averaged over all observers, were negative for each condition relative to the natural object hue. Thus, on average, given that the natural hue angles for all objects were located in the fourth ("red"-"yellow") quadrant of EMG space, observers adjusted hues to be slightly "redder" and "bluer" than natural. However, both chromatic texture and three-dimensionality significantly improved the deviation in memory hue relative to other conditions, as illustrated in Figure 6 and Table 5. 
Figure 6
 
Memory hue deviation: comparison of conditions within each experimental factor. Small grey circles indicate individual observer selections averaged over 10 identical trials, i.e., the mean selection of one observer for one condition, M = 28 × (4 × 3 − 1). Large filled symbols are the average over all 28 observers for each object/condition. (A) Chromatic surface factor (TEX vs. Uniform). Red squares: mean uniform condition (M = 4 × 3). Blue triangles: most saturated uniform condition (M = 4 × 3). (B) Shape diagnosticity factor (natural vs. generic). Each red square is the mean memory hue deviation for one object/chromatic surface/dimensionality condition combination (M = 3 × 3 × 2). (C) Shape dimensionality factor (3-D vs. 2-D). Each red square is the mean memory hue deviation for one object/chromatic surface/shape-diagnosticity condition combination (M = 3 × 3 × 2). Black line: unity line.
Table 4
 
Mean memory hue deviations (degrees) for chromatic surface and shape conditions. Right-most column gives the difference in deviation between 2-D and 3-D shape conditions.
Condition 3-D 2-D Mean 2-D − 3-D
TEX −1.96 −3.48 −2.72 1.52
MM −2.18 −7.45 −4.82 5.27
MS −3.13 −7.40 −5.26 4.27
Mean −2.42 −6.11
Natural −2.78 −5.51 −4.14
Generic −2.06 −6.71 −4.38
Table 5
 
Four-way ANOVA results for mean memory hue deviation. Significant effects indicated in bold.
Source df, df-error F p
Main effect
 Chromatic surface (TEX/Uniform) 1.2, 33 3.5 0.06
 Shape diagnosticity (Nat/Gen) 1.0, 27 0.091 0.76
Shape dimensionality (2-D/3-D) 1.0, 27 23 <0.01
Objects (Objs) 1.4, 38 6.8 <0.01
Main contrast
 TEX vs. MM 1.6, 42 2.5 0.15
TEX vs. MS 1.0, 27 6.2 <0.02
Main interactions
 TEX/Uniform and Nat/Gen 1.6, 42 2.0 0.10
TEX/Uniform and 2-D/3-D 1.7, 46 11 <0.01
TEX/Uniform and Objs 2.5, 70 16 <0.01
Interaction contrast
TEX/MM and 2-D/3-D 1.0, 27 34 <0.01
TEX/MS and 2-D/3-D 1.0, 27 8.7 <0.02
Taking the effect of the chromatic surface factor first (Figure 6A): t tests showed that the memory hue deviations are significantly different from zero (original hue) for all chromatic conditions, texture (TEX), uniform mean (MM), and uniform most saturated (MS) [tTEX(27) = −2.42, p < 0.05, tMM(27) = −2.67, p < 0.05, tMS(27) = −3.34, p < 0.05]. Although overall the ANOVA showed no significant main effect of chromatic surface factor on memory hue deviation (see Table 5), the texture conditions tend to have smaller deviations than the uniform conditions. This difference is significant for the uniform most saturated condition (Table 5). In short, chromatic surface texture tends to improve the mean memory hue deviation, and significantly so for one condition. 
With respect to shape diagnosticity, again the mean hue deviations are negative and significantly different from zero for both natural and generic shapes [tNAT(27) = −2.97, p < 0.05, tGEN(27) = −2.87, p < 0.05; Figure 6B and Table 4]. There is no evidence for improvement in memory hue deviation for natural shapes and no interaction between shape diagnosticity and chromatic surface factors. 
Lastly, shape dimensionality has a significant effect on memory hue deviation (Figure 6C and Table 5). On average, the mean memory hue selections for the 3-D condition are 1.5 times closer than the 2-D condition to the natural hue angle, and both are significantly different from zero [t3D(27) = −1.69, p < 0.05, t2D(27) = −4.13 , p < 0.05]. 
There is a significant interaction between the chromatic surface factor and shape dimensionality, as shown in Table 5. Interaction contrasts reveal that the memory hue deviations are significantly closer to the original hue angle for the textured 3-D and 2-D conditions, in comparison with the respective 3-D and 2-D conditions in each uniform surface condition. Furthermore, the difference between TEX-3-D and TEX-2-D is smaller than for the MM or MS counterparts. The improvement from chromatic texture thus limits further improvement from the increase in shape dimensionality. In short, three-dimensionality significantly improves mean memory hue relative to two-dimensionality, and the improvement is greatest for the uniform surface conditions. 
Memory hue accuracy
Despite the deviations in memory hue described above, the overall accuracy of observers' memory hue is high, based on the absolute angular error of the selected chromaticity distributions relative to the natural distributions. For all conditions, the mean accuracy is above 90% (see Table 6 and Figure 7). Yet accuracy is significantly improved by surface texture, natural shape contour, and three-dimensionality (Table 7). 
Figure 7
 
Left column: Mean memory hue accuracy. Right column: Mean memory hue precision. Condition comparisons within each experimental factor. (A), (D) Chromatic surface factor. Blue filled triangle: texture condition compared with mean uniform condition. Red filled squares: texture condition compared with most saturated uniform condition. Each point is the average across 28 subjects for each object/shape combination. (B), (E) Shape diagnosticity factor. Each point represents the mean memory color accuracy for one object/shape dimensionality/chromatic surface condition combination averaged across 28 subjects. (C), (F) Shape dimensionality factor split by shape diagnosticity. Each point represents one object/chromatic surface condition combination averaged across 28 subjects. Red filled squares: Natural contour. Blue filled triangles: Generic contour. Error bars denote one standard error of the mean (SEM). Black line: unity line.
Table 6
 
Absolute angular error (degrees) split by factor conditions. Totals also expressed as mean memory hue accuracy (i.e., the percentage in brackets). Note that the smaller the error, the higher the accuracy.
Condition 3-D 2-D 2-D − 3-D Nat Gen Mean
TEX 9.91 10.6 8.91 11.6 10.2 (94.6%)
MM 12.2 14.7 13.0 13.9 13.5 (92.5%)
MS 10.9 12.6 11.1 12.4 11.8 (93.5%)
Nat 10.1 11.9 1.80 11.0 (93.9%)
Gen 11.9 13.3 1.40 12.6 (92.9%)
Mean 11.0 (93.9%) 12.6 (92.9%)
Table 7
 
Four-way ANOVA results for memory hue accuracy. Significant effects indicated in bold.
Source df, df-error F p
Main effect
Chromatic surface (TEX/Uniform) 1.7, 46 6.9 <0.01
Shape diagnosticity (Nat/Gen) 1.0, 27 6.7 <0.02
Shape dimensionality (2-D/3-D) 1.0, 27 6.1 <0.02
Objects (Objs) 1.4, 38 12 <0.01
Main contrast
 TEX vs. MM 1.0, 27 11 <0.02
 TEX vs. MS 1.0, 27 2.7 0.12
Main interactions
 TEX/Uniform and Nat/Gen 1.8, 48 2.5 0.10
 TEX/Uniform and 2-D/3-D 1.7, 46 2.3 0.12
TEX/Uniform and Objs 2.5, 70 16 <0.01
 2-D/3-D and Objs 1.9, 51 2.8 0.07
 NAT/GEN and Objs 1.6, 42 1.7 0.20
 NAT/GEN and 2-D/3-D 1.0, 27 0.09 0.77
Interaction contrast
TEX/MM and 2-D/3-D 1.0, 27 7.5 <0.02
 TEX/MS and 2-D/3-D 1.0, 27 1.2 0.27
TEX/MM and NAT/GEN 1.0, 27 5.3 <0.05
 TEX/MS and NAT/GEN 1.0, 27 1.9 0.18
Specifically, memory hue accuracy is significantly higher for the texture condition than for the mean uniform (24% difference) and most saturated (13% difference) conditions (Figure 7A; Tables 6 and 7). Accuracy is 12.8% higher for the natural 2-D contour compared with the generic contour condition, a significant difference (Figure 7B; Tables 6 and 7). Shape dimensionality again has a significant effect, with observers being 12.7% more accurate for 3-D than 2-D shapes (Figure 7C; Tables 6 and 7). There is also a trend for this effect to be stronger for natural shapes than generic; the difference is almost one third the size of the effect (Δ3D-2D(Nat) = 17.6%, Δ3D-2D(Gen) = 12.0%; see Tables 6 and 7). The difference is significant for the banana (two-way ANOVA, F(1, 27) = 7.1, p < 0.02), but, we suggest, masked in the apple and the carrot. The reasons for this object dependency are discussed further in the Effect of dimensionality section. 
Overall there were no interactions for memory hue accuracy between the factors of chromatic surface and either shape dimensionality or shape diagnosticity (Table 7), indicating that the effect of chromatic texture on mean memory hue accuracy is independent of the shape in which it is presented. Although interaction contrasts show significant differences in the effects of dimensionality and diagnosticity between textured and mean uniform conditions, these are in opposite directions (Table 6). In summary, textured, naturally shaped, or 3-D surfaces induce higher memory hue accuracy than, respectively, uniformly colored, generically shaped, or 2-D surfaces, with performance being best for textured 3-D natural shapes (μTEX–NAT-3D = 95.3%). The improvement due to three-dimensionality tends to be stronger for natural than generic shapes. 
Memory hue precision
The range of selected hue angles—and hence, the precision—varies significantly with shape diagnosticity and dimensionality but not with chromatic surface factor (Figure 7, right column). 
Specifically, for the chromatic surface factor, there is a trend for precision to be higher in the TEX condition relative to MS (4.70% higher) and MM (0.159% higher) conditions (Figure 7D; Tables 8 and 9). For shape diagnosticity, memory hue precision is significantly higher for the natural compared with the generic condition (by 11.6%) (Figure 7E; Tables 8 and 9). For shape dimensionality, precision is on average higher for the 2-D versus 3-D condition (by 9.30%) (Figure 7F), but this difference is significant only for generic shapes (Figure 7F – blue triangles; Δ% = −11.3%), not for natural shapes (Figure 7F – red squares; Δ% = −5.40%), as shown by split ANOVA. 
Table 8
 
Range (degrees) split by factor conditions. Totals also expressed as mean memory hue precision (i.e., the percentage in brackets). Note that the smaller the range, the higher the precision.
Condition 3-D 2-D 2-D − 3-D Nat Gen Mean
TEX 22.6 22.2 21.2 23.7 22.4 (87.5%)
MM 24.2 20.7 21.5 23.4 22.5 (87.6%)
MS 24.7 22.3 21.5 25.5 23.5 (86.9%)
Nat 22.0 20.8 −1.20 21.4 (88.1%)
Gen 25.6 22.8 −2.80 24.2 (86.6%)
Mean 23.8 (86.8%) 21.8 (87.9%)
Table 9
 
ANOVA results for precision (range). Significant effects indicated in bold.
Source df, df-error F p
Four-way ANOVA: Main effect
 Chromatic surface (TEX/Uniform) 1.7, 46 0.002 0.58
Shape diagnosticity (Nat/Gen) 1.0, 27 11 <0.01
 Shape dimensionality (2-D/3-D) 1.0, 27 3.7 0.06
Objects (Objs) 1.6, 43 19 <0.01
Main contrast
 TEX vs. MM 1.0, 27 0.002 0.96
 TEX vs. MS 1.0, 27 0.63 0.43
Main interactions
 TEX/Uniform and Nat/Gen 1.8, 48 0.79 0.45
TEX/Uniform and 2-D/3-D 1.9, 50 3.3 <0.05
TEX/Uniform and Objs 3.3, 90 6.2 <0.01
2-D/3-D and Objs 1.8, 49 4.1 <0.05
 NAT/GEN and Objs 1.8, 50 0.85 0.42
TEX/Unif, Nat/Gen, Objs 24, 100 3.97 <0.02
Three-way ANOVA
 Natural – 2-D/3-D 1.0, 27 0.71 0.41
Generic – 2-D/3-D 1.0, 27 7.5 <0.02
 TEX – 2-D/3-D 1.0, 27 0.11 0.75
MM – 2-D/3-D 1.0, 27 8.1 <0.05
MS – 2-D/3-D 1.0, 27 3.1 <0.05
Interactions were found for mean memory hue precision between the chromatic surface factor and shape dimensionality (Table 9), indicating that the size of the effect of dimensionality depends on surface texture (and vice versa). Specifically, 3-D shape does not significantly decrease precision for textured surfaces, as it does for the MM and MS conditions (three-way ANOVA, Table 9). For clarity, Figure 8 illustrates the overall effect of shape on mean memory hue accuracy (Panel A) and mean memory hue precision (Panel B). In summary, textured, 3-D, naturally shaped surfaces induce higher mean memory hue precision than uniformly colored, 2-D, generically shaped surfaces (μTEX-NAT-3D = 89.6%, μMM-GEN-2D = 88.1%, and μMS-GEN-2D = 86.6%). 
Figure 8
 
Relationship between stimulus shape and (A) mean memory hue accuracy (%) and (B) mean memory hue precision (%), averaged across chromatic conditions and objects for all subjects. Error bars: one standard error of the mean. The trend shown in Panel A appeared similar between objects (Table 10).
Table 10
 
ANOVA contrast results for object type for accuracy and precision. Significant values indicated in bold.
df, df-error F p
Source - accuracy
 Apple vs. Banana 1.0, 27 23 <0.02
 Apple vs. Carrot 1.0, 27 6.3 <0.02
 Banana vs. Carrot 1.0, 27 8.3 <0.02
Source - precision
 Apple vs. Banana 1.0, 27 0.82 0.38
 Apple vs. Carrot 1.0, 27 25 <0.01
 Banana vs. Carrot 1.0, 27 22 <0.01
Interactions with object type
ANOVA revealed a significant main effect of object type on both mean memory hue accuracy (Table 7) and mean memory hue precision (Table 9). Memory hue accuracy was highest for the banana and lowest for the apple (μApple = 91.3%, μBanana = 95.6%, and μCarrot = 94.8%; Table 10). As discussed below (Effect of object category), this significant difference in accuracy between objects might be due to the difference in their surface textures, as the only significant interaction found is between object type and chromatic surface factor (TEX vs. Uniform, Table 7). Mean hue precision was highest for the carrot and lowest for the banana; there was no significant difference between apple and banana (μApple = 87.4%, μBanana = 84.1%, and μCarrot = 90.6%; Table 10). Furthermore, the size of the effect of object type on precision is affected by dimensionality (and vice versa) and chromatic surface factor (Table 9). In the Effect of object category section we discuss the significance of these results. 
Discussion
Our primary goal was to compare the memory color of familiar natural objects in different states of naturalness: with and without natural shape contour, natural chromatic texture, or three-dimensionality, and all combinations of these cues. The main result reported here is that surface polychromaticity, shape diagnosticity, and shape dimensionality all influence both the accuracy and precision of the memory hue of the familiar object. Each distinct contributor to object identity, and its effect on memory color, is described in the following sections. Note that previous studies have examined the effect of memory on color appearance using simple flat homogeneously colored patches (e.g., Jin & Shevell, 1996) or 2-D textures (e.g., Olkkonen et al., 2008), whereas this study is, to our knowledge, the first to examine the effects of 3-D solid shape cues and chromatic texture, separately and combined, on memory color. Furthermore, this study introduces a robust method to eliminate shading information from 2-D stimuli and to control separately for the effects of heterogeneous surface chromaticity, shape diagnosticity, and shape dimensionality. 
Effect of chromatic surface representation
We explored the effect of different representations of the chromatic properties of the familiar object's surface, namely: (a) full chromatic texture (TEX); (b) mean color only (mean uniform hue and luminance, MM); and (c) most saturated color only (most saturated hue and original mean luminance, MS). The results showed no significant effect of chromatic condition on the mean value of memory hue, but this is mainly because the negative and positive deviations from the original hue angle were averaged, with loss of the absolute difference from the typical angle. For a more meaningful evaluation of the effects, we therefore calculated the memory hue accuracy relative to the object's original hue angle. Observers were significantly more accurate for chromatically textured surfaces than for uniform surfaces. Furthermore, observers were slightly more precise for textured surfaces than uniform surfaces, although not significantly. The presence of chromatic texture also alters the effects of three-dimensionality on memory hue precision: the precision for textured surfaces does not decrease for 3-D shapes as it does for uniform surfaces (see the Effect of dimensionality section). Clearly, then, the observer's memory color for objects is enhanced by the presence of natural texture. This conclusion raises a further question, though: Does this effect depend simply on the multiplicity of colors forming the texture or on the spatial structure of the texture and its recognisability? Our results here cannot distinguish between these possibilities, but a further study addresses the issue (Vurro & Hurlbert, 2011). 
In conclusion, since natural objects do not possess homogeneously colored surfaces, but, on the contrary, are widely variegated both in chromaticity and luminance, this study indicates that a simple match to a uniform surface is not sufficient to evaluate the real character of object memory colors. Therefore future studies on memory color should take into account the complexity of the chromatic stimuli. 
Effect of shape diagnosticity
This study controls shape independently of surface attributes and therefore differs from other studies with similar aims. For example, previous studies (examining either chromatic discrimination or memory color) (Hansen, Giesel, & Gegenfurtner, 2008; Hansen et al., 2006; Olkkonen et al., 2008) varied the shape of the chromatic stimuli while simultaneously altering the texture, using random noise, pink noise, or brown noise texture for the generic control shape and natural texture for the natural shape. The effects of texture diagnosticity were thus confounded with the effects of shape diagnosticity. In addition, the textured stimuli were presented on a 2-D display with the luminance shading from the original image providing the 3-D cues. Here, we aim to evaluate the effect on memory color of different shape cues, i.e., generic/natural shape contour and 2-D/3-D geometry, while maintaining the same chromatic conditions. We are therefore able to treat shape diagnosticity as an independent factor. 
Observers showed better memory hue accuracy when the color stimuli were presented with their natural contour shapes than with generic shapes, i.e., observers selected hue angles closer to the object's typical hue angle. Furthermore, the precision of memory hue was higher for the natural contour shapes than for the generic, indicating that the observers are more certain of the object's typical hue angle (i.e., its color or color distribution). 
It might be argued that small differences in the chromatic distribution of the projected textures for the natural versus generic shapes, not the differences in shape diagnosticity, affected the mean memory hue accuracy and precision. This explanation is ruled out, though, because there was no overall interaction between chromatic surface condition and shape diagnosticity, i.e., the improvement due to texture, in comparison with uniform surface conditions, is not overall significantly different for generic and natural shapes. This conclusion is in general agreement with the findings that chromatic discrimination thresholds for chromatic textures are similar for natural object contours and generic rectangular patches (Hansen et al., 2008). Contrary to previous studies (Olkkonen et al., 2008), we thus find that natural shape contours improve memory color accuracy and precision, although we confirm our previous report of no significant effect on mean memory color (Ling et al., in press). 
In conclusion, this study sheds new light on how shape modulates memory color, demonstrating that diagnostic shape consistently increases both the accuracy and precision of memory color. 
Effect of dimensionality
The experiments reported here are the first (to our knowledge) to examine the effect on color appearance of chromatic variegation of natural object surfaces in a solid three-dimensional environment. Observers thus experience more natural object perception than in experiments using 2-D displays, while also being able to interact with the surface color in real time. 
The first key finding is that 3-D stimuli elicit a more accurate memory of the natural hue of the object. Since no interaction was found between the chromatic surface and shape dimensionality factors, this conclusion is valid for uniform as well as textured surfaces. Secondly, this effect tends to be stronger for natural shapes compared to generic (Figure 8), suggesting that the improvement may be at the level of object representations rather than lower-level chromatic discrimination. The fact that this difference is significant only for the banana has two possible explanations: (a) Accuracy has reached ceiling for certain objects; given the complexity of the surface color variegations, for certain objects such as the apple or the carrot, the introduction of more information from other cues cannot improve accuracy further; or (b) The contribution of 3-D shape cues to object identity for the apple or carrot is small compared to the banana (or compared to the contribution from shape contour alone). Although studies of object recognition have revealed only small improvements in the recognisability of 3-D shaded images over silhouettes, and only under particular conditions, other studies show that differences in recognition performance between silhouette and unfilled outlines depend significantly on the type of object and its particular depiction (Hayward, 1998; Hayward, Wong, & Spehar, 2005; Newell & Findlay, 1997; Wagemans et al., 2008). We therefore examined the recognisability of our specific stimuli in a control questionnaire. Observers' object identification performance was on average 30% higher for images of our 3-D solid objects (fully shaded, uniform, neutral surfaces) compared with images of their 2-D silhouettes (see Supplemental material: Control experiment for details). Furthermore, subjects perform strikingly better for the 3-D banana compared to the other two objects, reaching almost 100% recognisability. 
The banana's shape is therefore distinctive compared to the other two objects; its contribution to object identity is especially strong in 3-D. The contributions to identity of 3-D shape for the apple and carrot might be too small to be measured in our experiment. Further experiments are necessary to investigate this object-related difference in a larger number of objects. Note that, in addition to 3-D shading, the depth cues of binocular disparity and motion parallax, arising from the use of solid objects and the small head movements permitted, may have contributed to a "real-object advantage" in recognition (Chainay & Humphreys, 2001; Hiraoka, Suzuki, Hirayama, & Mori, 2009). The results thus suggest that memory hue accuracy for natural shaped objects is enhanced for three-dimensional solids because they portray object identity more effectively than 2-D silhouettes and thereby activate a more accurate representation of the object and its color. 
On the other hand, the fact that accuracy also improves for 3-D versus 2-D generic shapes, which are equally nondiagnostic of object identity, suggests that there may also be a low-level component to the improvement. This possibility must be considered, though, in the context of the results for mean memory hue precision. Memory hue precision is significantly lower for 3-D versus 2-D for the generic shapes only, with no significant difference for natural shapes. Similarly, precision is lower for 3-D versus 2-D for uniform surfaces but not significantly different for naturally textured surfaces. This result implies that three-dimensionality introduces noise in the observers' perception or performance, which is overcome by a specific object identification signal in the presence of natural cues. The noise may be sited at the level of chromatic discrimination, but although there is some evidence that chromatic noise masks luminance-defined targets (Kingdom, Bell, Gheorghiu, & Malkoc, 2010; Lucassen, Bijl, & Roelofsen, 2008) there is none to suggest that luminance shading should interfere with discrimination of chromatic textures. The effect of three-dimensionality of generic shapes on accuracy and precision in this task may therefore instead occur at a higher level; for example, three-dimensionality may of itself enhance access to natural object representations, but the lack of a specific 3-D exemplar may allow a multiplicity of representations that impede precision. 
In conclusion, the introduction of three-dimensional cues: (a) improves the accuracy of memory color for both natural and generic shaped objects; (b) increases the size of the accuracy effect for natural shaped objects via enhanced representation of object identity; and (c) reduces the precision of memory color when no specific natural information is provided. 
Effect of object category
The accuracy and precision of mean memory hue, as well as mean memory hue itself, vary significantly across objects, prompting the question: “What differentiates these objects in the context of color memory?” Given the inherent variation in chromatic discriminability across color space, the differences between objects might be explained largely by the differences in their mean hues. Although the mean hue of the surface chromaticity gamut does differ between objects, there is only a very weak correlation between accuracy and mean hue angle [Spearman's correlation, r(36) = 0.1, p = 0.57], as Figure 9A illustrates. In fact, two objects (apple and carrot) have similar hue angles (70 vs. 68) but very different accuracies, about 90% versus 94%; F(2, 54) = 12.5, p < 0.0005. On the other hand, precision decreases with increasing hue angle, r(36) = −0.84, p < 0.00001, over all subjects and objects (Figure 9C). 
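The mean hue angle referred to here is the direction of the mean hue vector of the surface chromaticity distribution (Figure 2), and memory hue deviation is an angular difference, which must be computed with wrap-around in mind. The following is a minimal illustrative sketch, not the authors' analysis code; the function names and the convention of expressing each chromaticity as a pair of chromatic-contrast coordinates relative to the neutral point are assumptions for the example.

```python
import math

def mean_hue_angle(chromaticities):
    """Circular mean hue (degrees, in [0, 360)) of chromaticity points.

    Each point is an (a, b) pair of chromatic coordinates relative to
    the neutral point; its hue angle is atan2(b, a). Summing unit
    vectors before taking the angle avoids the wrap-around error of
    averaging raw angles (10 deg and 350 deg average to 0 deg, not 180).
    """
    sx = sum(math.cos(math.atan2(b, a)) for a, b in chromaticities)
    sy = sum(math.sin(math.atan2(b, a)) for a, b in chromaticities)
    return math.degrees(math.atan2(sy, sx)) % 360.0

def hue_deviation(memory_hue_deg, natural_hue_deg):
    """Signed angular difference in degrees, wrapped into (-180, 180]."""
    d = (memory_hue_deg - natural_hue_deg) % 360.0
    return d - 360.0 if d > 180.0 else d
```

With this convention, a memory setting of 5° against a natural mean hue of 355° yields a deviation of +10°, rather than the −350° a naive subtraction would give.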
Figure 9
 
First row: Average of the mean memory hue accuracy across all conditions for the three objects as a function of mean hue angle in EMG color space (A) and object color variegation (in LUV space) (B). Second row: Average of the mean memory hue precision across all conditions for the three objects as a function of mean hue angle in EMG color space (C) and object color variegation (OV) (in LUV space) (D).
A second possible candidate for the difference in mean memory color accuracy between objects might be the extent of the surface chromaticity gamut. Table 3 shows, for example, that the carrot has a much smaller range of chromaticities (OV) than the apple. These distributions are not specific to the exemplars photographed but are representative of the object category, as we have shown in previous work (Ling et al., 2008; Vurro et al., 2009). A statistical analysis of surface chromaticity gamuts (Vurro, 2011) demonstrates that the natural gamuts of two objects differ if the objects belong to distinct object categories, but are similar if they are exemplars of the same category (e.g., two bananas under the same illumination). Based on this observation, we hypothesize that, for each object category, we store in memory an exemplar of the surface chromaticity distribution, and thus that memory color is a distribution rather than a single point. The chromaticity gamuts of the objects sampled here are thus, we argue, both representative of other exemplars in the category and comparable to the stored memory color for that category. 
Figure 9 shows the overall mean memory color accuracy (B) and precision (D) of the three objects as a function of chromatic variegation (OV). Objects with larger OV generally have lower accuracy and precision, although the correlation is moderate [accuracy correlation: r(32) = −0.44, p < 0.01; precision correlation: r(32) = −0.41, p < 0.01]. This result might indicate a greater variability in memory color for objects with greater chromatic variegation, and hence lower precision and accuracy (relative to the average typical hue) in mean memory color matches. Nevertheless, precision also appears to be affected by the mean hue angle. There may also be a decrement in performance due to the increased complexity of the match. Moreover, spatial characteristics of the chromatic texture (not examined in this study) may affect memory color. Further studies on a larger number of natural objects are necessary to determine the key factors. 
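The correlations quoted in this section are Spearman rank correlations. As a dependency-free illustration of the statistic itself (not the analysis code used in the study; scipy.stats.spearmanr computes the same coefficient together with a p-value), Spearman's rho is simply the Pearson correlation of the rank-transformed data, with ties assigned their average rank:

```python
import math

def _ranks(values):
    """Average ranks (1-based); tied values share their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of values tied with values[order[i]]
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / math.sqrt(var_x * var_y)
```

Because it operates on ranks, the statistic captures any monotonic relation between, say, per-object OV and accuracy, without assuming linearity.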
Interaction of shape and surface properties in object representation: Possible neural sites
Our findings suggest that activation of the memory color of familiar objects is enhanced by simultaneous visualization of 3-D shape and surface texture properties, where these are also diagnostic of object identity. This result prompts the question of where these diagnostic attributes—shape, texture, and color—are integrated in the neural representation of the object. 
Recent work suggests that surface attributes are analyzed independently from geometric shape properties in distinct visual pathways, the lateral (LOC) and medial (CoS) areas of occipital cortex, respectively (Cant, Arnott, & Goodale, 2009; Cant & Goodale, 2007; James, Culham, Humphrey, Milner, & Goodale, 2003). Further, there is evidence that the distinct surface attributes of texture and color are also analyzed separately in medial occipital cortex, with color foci located in anterior CoS and lingual gyrus, and texture foci in posterior CoS (Cavina-Pratesi, Kentridge, Heywood, & Milner, 2010; James et al., 2003; Steeves et al., 2004). The evidence suggests that LOC is a higher level projection site of low-level shape contour information contributing to object recognition, whereas the CoS and lingual gyrus are the sites for object recognition based on surface properties. The combination of shape and surface information into an integrated object representation is likely to occur outside these functionally selective foci, and there is evidence that areas in the fusiform gyrus are activated by combinations of color, shape, and surface texture, and that these areas overlap with areas selective for complex stimuli such as faces and scenes (Cant & Goodale, 2007; Cavina-Pratesi et al., 2010). Whether these are the definitive areas in the integrative processing necessary for object recognition based on multiple features is unknown. The stimuli used in the latter studies to probe selectivity for surface properties are also unfamiliar and man-made, and although in some cases (Cant & Goodale, 2007) the surface textures were diagnostic of particular materials (wood, metal), neither the colors nor the textures were diagnostic of object identity. Thus, it remains an open question where and how chromatic texture and shape, when both are diagnostic of object identity, are processed such that their congruency enhances recall of either property in an object recognition task. 
Preliminary results from an fMRI adaptation paradigm show that areas in both LOC and CoS, as well as frontal areas, are activated more by images of 3-D-shaded objects with diagnostic surface color and texture than without, implying that some integration of features may occur at these levels, and in particular that LOC encodes not only shape but also surface properties where they are diagnostic of natural objects (Hurlbert, Ling, Pietta, & Vuong, 2009, 2010). 
Conclusions
Objects and scenes are normally chromatically very rich: a pixel-by-pixel look at pictures of even simple objects generally reveals dramatic chromatic variations across their surfaces. Yet the majority of studies on memory color habitually simplify color stimuli to homogeneous color patches. Moreover, we live in a three-dimensional environment, in which 3-D shape cues are almost always available to contribute to object recognition, and this aspect of the real world, too, is frequently disregarded by color studies. This study confirms that memory color is influenced both by the chromatic heterogeneity of surfaces and by 3-D shape cues, demonstrating the importance of incorporating these cues in assessments of the effects of memory color. 
In answer to our initial questions, we reach the following conclusions. Although the mean value of memory hue is unaffected by polychromaticity and shape diagnosticity (in every condition the mean memory hue tends to be “redder” and “bluer” than the natural object color), the presence of 3-D shape cues brings the mean memory hue significantly closer to the natural object mean hue. The absolute accuracy of memory color is improved by polychromaticity, shape diagnosticity, and three-dimensionality, relative to uniformly colored, generic, or 2-D shapes, respectively. The effect of three-dimensionality on accuracy tends to be stronger for natural than for generic shapes. Shape diagnosticity also improves the precision of memory color, and there is a trend for polychromaticity to do so as well. The size of the effect of polychromaticity on memory hue deviation, accuracy, or precision is enhanced by the presence of 3-D shape but is only marginally affected by the naturalness of the object's shape, and only in the context of accuracy. The results suggest, though, that the improvement due to polychromaticity may reach a ceiling, limiting further improvement from 3-D shape cues or shape diagnosticity. 
Thus, enhancing the naturalness of the stimulus, in terms of either surface or shape properties, enhances the accuracy and precision of memory color. Yet, in keeping with recent findings (Ling et al., in press; Olkkonen et al., 2008), the diagnosticity of the stimulus shape does not modulate the mean value of the memory hue itself, even when the object's natural polychromaticity is faithfully depicted. Instead, the presence of natural shape reduces the absolute error and variability in the mean hue setting. This improvement suggests that the diagnosticity of the stimulus shape enhances access to color representations, which in turn suggests that shape and color attributes are synergistically encoded, if not integrated into a fixed representation. Thus, the memory color of familiar objects behaves as a stable, but imperfect, representation of reality, incorporating natural polychromaticity and more reliably accessed in the presence of diagnostic and 3-D shape cues. 
Supplementary Materials
Acknowledgments
Preliminary reports of these results were presented at VSS 2009 and appeared in its special issue (Vurro et al., 2009). Supported by EPSRC project grant (EP/D068738) to ACH. 
Commercial relationships: none. 
Corresponding author: Milena Vurro. 
Address: Department of Neurosurgery, Harvard Medical School, Massachusetts General Hospital, Boston, MA, USA. 
References
Bartleson C. J. (1960). Memory colors of familiar objects. Journal of the Optical Society of America, 50 (1), 73–77. [CrossRef] [PubMed]
Bartleson C. J. (1961). Color in memory in relation to photographic reproduction. Photographic Science and Engineering, 5 (6), 327–331.
Bloj M. G. Kersten D. Hurlbert A. C. (1999). Perception of three-dimensional shape influences colour perception through mutual illumination. Nature, 402 (6764), 877–879. [PubMed]
Bolles R. C. Hulicka I. M. Hanly B. (1959). Colour judgment as a function of stimulus conditions and memory colour. Canadian Journal of Psychology/Revue Canadienne de Psychologie, 13 (3), 175. [CrossRef]
Bruner J. S. Postman L. Rodrigues J. (1951). Expectation and the perception of color. The American Journal of Psychology, 64, 216–227. [CrossRef] [PubMed]
Cant J. S. Arnott S. R. Goodale M. A. (2009). fMR-adaptation reveals separate processing regions for the perception of form and texture in the human ventral stream. Experimental Brain Research, 192 (3), 391–405. [CrossRef] [PubMed]
Cant J. S. Goodale M. A. (2007). Attention to form or surface properties modulates different regions of human occipitotemporal cortex. Cerebral Cortex, 17 (3), 713–731. [PubMed]
Cavina-Pratesi C. Kentridge R. W. Heywood C. A. Milner A. D. (2010). Separate processing of texture and form in the ventral stream: Evidence from fMRI and visual agnosia. Cerebral Cortex, 20 (2), 433–446. [CrossRef] [PubMed]
Chainay H. Humphreys G. W. (2001). The real-object advantage in agnosia: Evidence for a role of surface and depth information in object recognition. Cognitive Neuropsychology, 18 (2), 175–191. [CrossRef] [PubMed]
Delk J. L. Fillenbaum S. (1965). Differences in perceived color as a function of characteristic color. American Journal of Psychology, 78, 290–293. [CrossRef] [PubMed]
Duncker K. (1939). The influence of past experience upon perceptual properties. American Journal of Psychology, 52 (2), 255–265. [CrossRef]
Ehinger K. A. Brockmole J. R. (2008). The role of color in visual search in real-world scenes: Evidence from contextual cuing. Attention, Perception, & Psychophysics, 70 (7), 1366–1378, doi:10.3758/PP.70.7.1366. [CrossRef]
Eskew R. McLellan J. Guilianini F. (1999). Chromatic detection and discrimination. In Gegenfurtner K. Sharpe L. (Eds.), Color vision: From genes to perception (pp. 345–368). Cambridge, UK: Cambridge University Press.
Hansen T. Giesel M. Gegenfurtner K. R. (2008). Chromatic discrimination of natural objects. Journal of Vision, 8 (1): 2, 1–19, http://www.journalofvision.org/content/8/1/2, doi:10.1167/8.1.2. [PubMed] [Article] [CrossRef] [PubMed]
Hansen T. Olkkonen M. Walter S. Gegenfurtner K. (2006). Memory modulates color appearance. Nature Neuroscience, 9 (11), 1367–1368. [CrossRef] [PubMed]
Hayward W. G. (1998). Effects of outline shape in object recognition. Journal of Experimental Psychology—Human Perception and Performance, 24 (2), 427–440, doi:10.1037//0096-1523.24.2.427. [CrossRef]
Hayward W. G. Wong A. C. N. Spehar B. (2005). When are viewpoint costs greater for silhouettes than for shaded images? Psychonomic Bulletin & Review, 12 (2), 321–327, doi:10.3758/bf03196379. [CrossRef] [PubMed]
Hering E. (1964). Outlines of a theory of the light sense. Hurvich L. M. Jameson D. (Trans.). Cambridge, MA: Harvard University Press. (Original work published 1874).
Hiraoka K. Suzuki K. Hirayama K. Mori E. (2009). Visual agnosia for line drawings and silhouettes without apparent impairment of real-object recognition: A case report. Behavioural Neurology, 21, 187–192. [CrossRef] [PubMed]
Humphrey G. K. Goodale M. A. Jakobson L. S. Servos P. (1994). The role of surface information in object recognition: Studies of a visual form agnosic and normal subjects. Perception, 23 (12), 1457–1481. [CrossRef] [PubMed]
Hurlbert A. C. Ling Y. Pietta I. Vuong Q. C. (2009, October). Speckled strawberries and mottled bananas: Encoding diagnostic chromatic textures of objects in human visual cortex. Paper presented at the 39th Annual meeting of Society for Neuroscience, Chicago, IL.
Hurlbert A. C. Ling Y. Pietta I. Vuong Q. C. (2010). The representation of diagnostic chromatic texture in object-selective areas of human visual cortex. Perception, 39(ECVP Abstract Supplement), 157.
Hurlbert A. C. Vurro M. Ling Y. (2008). Colour constancy of polychromatic surfaces. Journal of Vision, 8 (6): 1101, http://www.journalofvision.org/content/8/6/1101, doi:10.1167/8.6.1101. [Abstract] [CrossRef]
James T. W. Culham J. Humphrey G. K. Milner A. D. Goodale M. A. (2003). Ventral occipital lesions impair object recognition but not object-directed grasping: An fMRI study. Brain, 126, 2463–2475. [CrossRef] [PubMed]
Jin E. W. Shevell S. K. (1996). Color memory and color constancy. Journal of the Optical Society of America A: Optics and Image Science, and Vision, 13 (10), 1981–1991. [CrossRef]
Kingdom F. A. A. Bell J. Gheorghiu E. Malkoc G. (2010). Chromatic variations suppress suprathreshold brightness variations. Journal of Vision, 10(10): 13, 1–13, http://www.journalofvision.org/content/10/10/13, doi:10.1167/10.10.13. [PubMed] [Article] [CrossRef] [PubMed]
Kinnear P. R. Sahraie A. (2002). New Farnsworth-Munsell 100 hue test norms of normal observers for each year of age 5-22 and for decades 30-70. British Journal of Ophthalmology, 86, 1408–1411. [CrossRef] [PubMed]
Ling Y. Hurlbert A. (2004). Color and size interactions in a real 3D object similarity task. Journal of Vision, 4 (9): 5, 721–734, http://www.journalofvision.org/content/4/9/5, doi:10.1167/4.9.5. [PubMed] [Article] [CrossRef]
Ling Y. Hurlbert A. C. (2005). Memory colours of real, familiar objects under changing illumination. Perception, 34 (2), 247.
Ling Y. Vurro M. Hurlbert A. C. (2008). Surface chromaticity distributions of natural objects under changing illumination. CGIV, 2008, 263–267.
Ling Y. Vurro M. Hurlbert A. C. (in press). The effects of shape diagnosticity and depth cues on memory color of natural objects. Manuscript submitted for publication.
Lucassen M. P. Bijl P. Roelofsen J. (2008). The perception of static colored noise: Detection and masking described by CIE94. Color Research and Application, 33 (3), 178–191, doi:10.1002/col.20401. [CrossRef]
Mitterer H. de Ruiter J. P. (2008). Recalibrating color categories using world knowledge. Psychological Science, 19, 629–634, doi:10.1111/j.1467-9280.2008.02133.x. [CrossRef] [PubMed]
Naor-Raz G. Tarr M.J. Kersten D. (2003). Is color an intrinsic property of object representation? Perception, 32, 667–680, doi:10.1068/p5050. [CrossRef] [PubMed]
Newell F. N. Findlay J. M. (1997). The effect of depth rotation on object identification. Perception, 26 (10), 1231–1257, doi:10.1068/p261231. [CrossRef] [PubMed]
Newhall S. M. Burnham R. W. Clark J. R. (1957). Comparison of successive with simultaneous color matching. Journal of the Optical Society of America, 47 (1), 43–56. [CrossRef]
Oliva A. Schyns P. G. (2000). Diagnostic colors mediate scene recognition. Cognitive Psychology, 41 (2), 176–210. [CrossRef] [PubMed]
Olkkonen M. Hansen T. Gegenfurtner K. R. (2008). Color appearance of familiar objects: Effects of object shape, texture, and illumination changes. Journal of Vision, 8 (5): 13, 1–16, http://www.journalofvision.org/content/8.5.13, doi:10.1167/8.5.13. [PubMed] [Article] [CrossRef] [PubMed]
Ostergaard A. L. Davidoff J. B. (1985). Some effects of color on naming and recognition of objects. Journal of Experimental Psychology: Learning, Memory and Cognition, 11, 579–587. [CrossRef]
Ostergaard A. L. Davidoff J. B. (1988). The role of colour in categorical judgements. Quarterly Journal of Experimental Psychology, 20A, 533–544.
Perez-Carpinell J. Baldovi R. de Fez M. Castro J. (1998). Color memory matching: Time effect and other factors. Color Research & Application, 23 (4), 234–247. [CrossRef]
Rossion B. Pourtois G. (2004). Revisiting Snodgrass and Vanderwart's object pictorial set: The role of surface detail in basic-level object recognition. Perception, 33, 217–236. [CrossRef] [PubMed]
Shevell S. K. Miller P. R. (1996). Color perception with test and adapting lights perceived in different depth planes. Vision Research, 36, 949–954. [CrossRef] [PubMed]
Siple P. Springer R. M. (1983). Memory and preference for the colors of objects. Perception & Psychophysics, 34, 363–370. [CrossRef] [PubMed]
Smith V. C. Pokorny J. (1975). Spectral sensitivity of the foveal cone photopigments between 400 nm and 500 nm. Vision Research, 15, 161–171. [CrossRef] [PubMed]
Steeves J. K. E. Humphrey G. K. Culham J. C. Menon R. S. Milner A. D. Goodale M. A. (2004). Behavioral and neuroimaging evidence for a contribution of color and texture information to scene classification in a patient with visual form agnosia. Journal of Cognitive Neuroscience, 16, 955–965. [CrossRef] [PubMed]
Stokes M. Fairchild M. D. Berns R. S. (1992). Precision requirements for digital color reproduction. ACM Transactions on Graphics, 11 (4), 406–422. [CrossRef]
Tanaka J. Presnell L. (1999). Color diagnosticity in object recognition. Attention, Perception, & Psychophysics, 61 (6), 1140–1153. doi: 10.3758/bf03207619. [CrossRef]
Tanaka J. Weiskopf D. Williams P. (2001). The role of color in high-level vision. Trends in Cognitive Sciences, 5 (5), 211–215. [CrossRef] [PubMed]
Therriault D. J. Yaxley R. H. Zwaan R. A. (2009). The role of color diagnosticity in object recognition and representation. Cognitive Processing, 10 (4), 335–342, doi:10.1007/s10339-009-0260-4. [CrossRef] [PubMed]
Vurro M. (2011). The role of chromatic texture and 3D shape in colour discrimination, memory colour, and colour constancy of natural objects. Unpublished doctoral dissertation, Newcastle University, Newcastle upon Tyne, UK.
Vurro M. Hurlbert A. C. (2011). The effect of shape and chromatic texture diagnosticity on color discrimination of natural objects. Journal of Vision, 11 (11): 401, http://www.journalofvision.org/content/11/11/401, doi:10.1167/11.11.401. [Abstract] [CrossRef]
Vurro M. Ling Y. Hurlbert A. C. (2007). The effect of shape on memory colour and colour constancy. Perception, 36(ECVP Supplement), 201, doi:10.1068/v070261.
Vurro M. Ling Y. Hurlbert A. C. (2009). Memory colours of polychromatic objects. Journal of Vision, 9 (8): 333, http://www.journalofvision.org/content/9/8/333, doi:10.1167/9.8.333. [Abstract] [CrossRef]
Wagemans J. De Winter J. Op de Beeck H. P. Ploeger A. Beckers T. Vanroose P. (2008). Identification of everyday objects on the basis of silhouette and outline versions. Perception, 37 (2), 207–244, doi:10.1068/p5825. [CrossRef] [PubMed]
Yamauchi Y. Uchikawa K. (2005). Depth information affects judgment of the surface-color mode appearance. Journal of Vision, 5 (6): 3, 515–524, http://www.journalofvision.org/content/5/6/3, doi:10.1167/5.6.3. [PubMed] [Article] [CrossRef]
Yendrikhovskij S. Blommaert F. de Ridder H. (1999). Representation of memory prototype for an object color. Color Research & Application, 24 (6), 393–410. [CrossRef]
Yip A. W. Sinha P. (2002). Contribution of color to face recognition. Perception, 31, 995–1003, doi:10.1068/p3376. [CrossRef] [PubMed]
Figure 1
 
Schematic figure of the experimental apparatus (right side view); see Supplemental materials: Experimental apparatus for further details.
Figure 2
 
Photographs of the experimental objects and their chromaticity distributions. Top to bottom: Gala apple, banana, and carrot. Left column: chromaticity distributions of the experimental objects in EMG cone contrast space under D65 illumination. Each chromaticity is represented only once even if occurring more frequently in the distribution. Note that the color of each point only approximately indicates its actual color due to uncalibrated reproduction in this figure. Black contour: the area containing 99% of the chromaticities of the object's chromatic distribution (OV). Black line: mean hue vector of the distribution (as described in text). Black star: mean chromaticity. Central column: digital photographs of natural objects used for chromatic texture extraction (see Texture extraction). Right column: digital photographs of gray-painted natural objects.
Figure 3
 
(A) Example of the center of a typical scene seen from the observer's point of view. (B) Solid test objects.
Figure 4
 
(A) Image of the Gala apple used in the experiments; (B) its extracted texture 2-D image without highlights and shading. Note that the images only approximately represent the actual colors due to their uncalibrated reproduction for this print.
Figure 5
 
(A) The banana's natural distribution (yellow-brown) and its 180° rotated distribution (blue-violet) in EMG cone contrast space. Note that the color of each point only approximately indicates its actual color due to uncalibrated reproduction in this figure. Each color is represented only once even if it occurs more frequently in the distribution. Bottom left insert: original banana image. Top right insert: altered banana image. (B) General procedure: illustration of the image sequence in a single trial. Note that the colors here are for illustration purposes only and are not as displayed in the experiment.
Figure 6
 
Memory hue deviation: comparison of conditions within each experimental factor. Small grey circles indicate individual observer selections averaged over 10 identical trials, i.e., the mean selection of one observer for one condition, M = 28 × (4 × 3 − 1). Large filled symbols are the average over all 28 observers for each object/condition. (A) Chromatic surface factor (TEX vs. Uniform). Red squares: mean uniform condition (M = 4 × 3). Blue triangles: most saturated uniform condition (M = 4 × 3). (B) Shape diagnosticity factor (natural vs. generic). Each red square is the mean memory hue deviation for one object/chromatic surface/dimensionality condition combination (M = 3 × 3 × 2). (C) Shape dimensionality factor (3-D vs. 2-D). Each red square is the mean memory hue deviation for one object/chromatic surface/shape-diagnosticity condition combination (M = 3 × 3 × 2). Black line: unity line.
Figure 7
 
Left column: Mean memory hue accuracy. Right column: Mean memory hue precision. Condition comparisons within each experimental factor. (A), (D) Chromatic surface factor. Blue filled triangle: texture condition compared with mean uniform condition. Red filled squares: texture condition compared with most saturated uniform condition. Each point is the average across 28 subjects for each object/shape combination. (B), (E) Shape diagnosticity factor. Each point represents the mean memory color accuracy for one object/shape dimensionality/chromatic surface condition combination averaged across 28 subjects. (C), (F) Shape dimensionality factor split by shape diagnosticity. Each point represents one object/chromatic surface condition combination averaged across 28 subjects. Red filled squares: Natural contour. Blue filled triangles: Generic contour. Error bars denote one standard error of the mean (SEM). Black line: unity line.
Figure 8
 
Relationship between stimulus shape and (A) mean memory hue accuracy (%) and (B) mean memory hue precision (%), averaged across chromatic conditions and objects for all subjects. Error bars: one standard error of the mean. The trend shown in Panel A appeared similar between objects (Table 10).
Table 1
 
CIE xyY chromaticity coordinates of the chamber walls, under D65-metameric illumination from the data projector and side lamps, as described in the text.
Illuminant D65 Y(cd/m2) x y
Main display 7.061 0.313 0.327
Walls 3.991 0.314 0.332
Table 2
 
Approximate surface area of the objects (in square centimeters) and their size in degrees of visual angle, as viewed in the experimental chamber from the subject's point of view.
Size Apple Banana Carrot
Degrees 3.81 × 3.15 7.36 × 3.10 8.72 × 1.67
Centimeters squared 52 62 60
Table 3
 
Mean hue of the object in EMG color space and object chromatic variegation (OV) in EMG color space and CIE LUV color space.
Object Mean hue OV (EMG) OV (LUV)
Apple 70.12 2.32 × 10−3 3304.4
Banana 84.13 1.28 × 10−3 2823.3
Carrot 67.72 0.96 × 10−3 2256.8
Table 4
 
Mean memory hue deviations (degrees) for chromatic surface and shape conditions. Right-most column gives the difference in deviation between 2-D and 3-D shape conditions.
Condition 3-D 2-D Mean 2-D − 3-D
TEX −1.96 −3.48 −2.72 1.52
MM −2.18 −7.45 −4.82 5.27
MS −3.13 −7.40 −5.26 4.27
Mean −2.42 −6.11
Natural −2.78 −5.51 −4.14
Generic −2.06 −6.71 −4.38
Table 5

Four-way ANOVA results for mean memory hue deviation. Significant effects indicated in bold.

| Source | df, df-error | F | p |
|---|---|---|---|
| *Main effect* | | | |
| Chromatic surface (TEX/Uniform) | 1.2, 33 | 3.5 | 0.06 |
| Shape diagnosticity (Nat/Gen) | 1.0, 27 | 0.091 | 0.76 |
| **Shape dimensionality (2-D/3-D)** | 1.0, 27 | 23 | **<0.01** |
| **Objects (Objs)** | 1.4, 38 | 6.8 | **<0.01** |
| *Main contrast* | | | |
| TEX vs. MM | 1.6, 42 | 2.5 | 0.15 |
| **TEX vs. MS** | 1.0, 27 | 6.2 | **<0.02** |
| *Main interactions* | | | |
| TEX/Uniform and Nat/Gen | 1.6, 42 | 2.0 | 0.10 |
| **TEX/Uniform and 2-D/3-D** | 1.7, 46 | 11 | **<0.01** |
| **TEX/Uniform and Objs** | 2.5, 70 | 16 | **<0.01** |
| *Interaction contrast* | | | |
| **TEX/MM and 2-D/3-D** | 1.0, 27 | 34 | **<0.01** |
| **TEX/MS and 2-D/3-D** | 1.0, 27 | 8.7 | **<0.02** |
Table 6

Absolute angular error (degrees) split by factor conditions. Totals also expressed as mean memory hue accuracy (i.e., the percentage in brackets). Note that the smaller the error, the higher the accuracy.

| Condition | 3-D | 2-D | 2-D − 3-D | Nat | Gen | Mean |
|---|---|---|---|---|---|---|
| TEX | 9.91 | 10.6 | | 8.91 | 11.6 | 10.2 (94.6%) |
| MM | 12.2 | 14.7 | | 13.0 | 13.9 | 13.5 (92.5%) |
| MS | 10.9 | 12.6 | | 11.1 | 12.4 | 11.8 (93.5%) |
| Nat | 10.1 | 11.9 | 1.80 | | | 11.0 (93.9%) |
| Gen | 11.9 | 13.3 | 1.40 | | | 12.6 (92.9%) |
| Mean | 11.0 (93.9%) | 12.6 (92.9%) | | | | |
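The bracketed percentages in Tables 6 and 8 appear consistent with expressing the mean angular error (or range) as a fraction of the 180° half-circle of hue angles. A minimal sketch of this conversion, under the assumption (our inference, not stated explicitly in the text) that a score of 0° maps to 100% and 180° maps to 0%:

```python
def hue_score_percent(degrees: float) -> float:
    """Convert an angular error or range (in degrees) to an accuracy or
    precision percentage, assuming linear scaling over the 180-degree
    half-circle of hue angles (an inference from Tables 6 and 8)."""
    return round(100.0 * (1.0 - degrees / 180.0), 1)

# Examples matching table totals:
print(hue_score_percent(13.5))  # MM accuracy total -> 92.5
print(hue_score_percent(21.4))  # Nat precision total -> 88.1
```

A few entries deviate from this formula by 0.1 percentage points, which is consistent with the published values having been rounded from unrounded angular scores.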
Table 7

Four-way ANOVA results for memory hue accuracy. Significant effects indicated in bold.

| Source | df, df-error | F | p |
|---|---|---|---|
| *Main effect* | | | |
| **Chromatic surface (TEX/Uniform)** | 1.7, 46 | 6.9 | **<0.01** |
| **Shape diagnosticity (Nat/Gen)** | 1.0, 27 | 6.7 | **<0.02** |
| **Shape dimensionality (2-D/3-D)** | 1.0, 27 | 6.1 | **<0.02** |
| **Objects (Objs)** | 1.4, 38 | 12 | **<0.01** |
| *Main contrast* | | | |
| **TEX vs. MM** | 1.0, 27 | 11 | **<0.02** |
| TEX vs. MS | 1.0, 27 | 2.7 | 0.12 |
| *Main interactions* | | | |
| TEX/Uniform and Nat/Gen | 1.8, 48 | 2.5 | 0.10 |
| TEX/Uniform and 2-D/3-D | 1.7, 46 | 2.3 | 0.12 |
| **TEX/Uniform and Objs** | 2.5, 70 | 16 | **<0.01** |
| 2-D/3-D and Objs | 1.9, 51 | 2.8 | 0.07 |
| NAT/GEN and Objs | 1.6, 42 | 1.7 | 0.20 |
| NAT/GEN and 2-D/3-D | 1.0, 27 | 0.09 | 0.77 |
| *Interaction contrast* | | | |
| **TEX/MM and 2-D/3-D** | 1.0, 27 | 7.5 | **<0.02** |
| TEX/MS and 2-D/3-D | 1.0, 27 | 1.2 | 0.27 |
| **TEX/MM and NAT/GEN** | 1.0, 27 | 5.3 | **<0.05** |
| TEX/MS and NAT/GEN | 1.0, 27 | 1.9 | 0.18 |
Table 8

Range (degrees) split by factor conditions. Totals also expressed as mean memory hue precision (i.e., the percentage in brackets). Note that the smaller the range, the higher the precision.

| Condition | 3-D | 2-D | 2-D − 3-D | Nat | Gen | Mean |
|---|---|---|---|---|---|---|
| TEX | 22.6 | 22.2 | | 21.2 | 23.7 | 22.4 (87.5%) |
| MM | 24.2 | 20.7 | | 21.5 | 23.4 | 22.5 (87.6%) |
| MS | 24.7 | 22.3 | | 21.5 | 25.5 | 23.5 (86.9%) |
| Nat | 22.0 | 20.8 | −1.20 | | | 21.4 (88.1%) |
| Gen | 25.6 | 22.8 | −2.80 | | | 24.2 (86.6%) |
| Mean | 23.8 (86.8%) | 21.8 (87.9%) | | | | |
Table 9

ANOVA results for precision (range). Significant effects indicated in bold.

| Source | df, df-error | F | p |
|---|---|---|---|
| *Four-way ANOVA: main effect* | | | |
| Chromatic surface (TEX/Uniform) | 1.7, 46 | 0.002 | 0.58 |
| **Shape diagnosticity (Nat/Gen)** | 1.0, 27 | 11 | **<0.01** |
| Shape dimensionality (2-D/3-D) | 1.0, 27 | 3.7 | 0.06 |
| **Objects (Objs)** | 1.6, 43 | 19 | **<0.01** |
| *Main contrast* | | | |
| TEX vs. MM | 1.0, 27 | 0.002 | 0.96 |
| TEX vs. MS | 1.0, 27 | 0.63 | 0.43 |
| *Main interactions* | | | |
| TEX/Uniform and Nat/Gen | 1.8, 48 | 0.79 | 0.45 |
| **TEX/Uniform and 2-D/3-D** | 1.9, 50 | 3.3 | **<0.05** |
| **TEX/Uniform and Objs** | 3.3, 90 | 6.2 | **<0.01** |
| **2-D/3-D and Objs** | 1.8, 49 | 4.1 | **<0.05** |
| NAT/GEN and Objs | 1.8, 50 | 0.85 | 0.42 |
| **TEX/Unif, Nat/Gen, Objs** | 24, 100 | 3.97 | **<0.02** |
| *Three-way ANOVA* | | | |
| Natural – 2-D/3-D | 1.0, 27 | 0.71 | 0.41 |
| **Generic – 2-D/3-D** | 1.0, 27 | 7.5 | **<0.02** |
| TEX – 2-D/3-D | 1.0, 27 | 0.11 | 0.75 |
| **MM – 2-D/3-D** | 1.0, 27 | 8.1 | **<0.05** |
| **MS – 2-D/3-D** | 1.0, 27 | 3.1 | **<0.05** |
Table 10

ANOVA results by object type for accuracy and precision. Significant effects indicated in bold.

| Source | df, df-error | F | p |
|---|---|---|---|
| *Accuracy* | | | |
| **Apple vs. Banana** | 1.0, 27 | 23 | **<0.02** |
| **Apple vs. Carrot** | 1.0, 27 | 6.3 | **<0.02** |
| **Banana vs. Carrot** | 1.0, 27 | 8.3 | **<0.02** |
| *Precision* | | | |
| Apple vs. Banana | 1.0, 27 | 0.82 | 0.38 |
| **Apple vs. Carrot** | 1.0, 27 | 25 | **<0.01** |
| **Banana vs. Carrot** | 1.0, 27 | 22 | **<0.01** |