Open Access
Article  |   April 2016
Perceived depth from shading boundaries
Journal of Vision April 2016, Vol.16, 5. doi:10.1167/16.6.5
      Juno Kim, Stuart Anstis; Perceived depth from shading boundaries. Journal of Vision 2016;16(6):5. doi: 10.1167/16.6.5.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Shading is well known to provide information that the visual system uses to recover the three-dimensional shape of objects. We examined conditions under which patterns in shading promote the experience of a change in depth at contour boundaries, rather than a change in reflectance. In Experiment 1, we used image manipulation to illuminate different regions of a smooth surface from different directions. This manipulation imposed local differences in shading direction across edge contours (delta shading). We found that increasing the angle of delta shading from 0° to 180° monotonically increased perceived depth across the edge. Experiment 2 found that the perceptual splitting of shading into separate foreground and background surfaces depended on an assumed light source from above prior. Image regions perceived as foreground structures in upright images appeared farther in depth when the same images were inverted. We also found that the experienced break in surface continuity could promote the experience of amodal completion of colored contours that were ambiguous as to their depth order (Experiment 3). These findings suggest that the visual system can identify occlusion relationships based on monocular variations in local shading direction, but interprets this information according to a light source from above prior at midlevel stages of visual processing.

Introduction
We visually perceive surface properties of three-dimensional (3D) shape, albedo, and transparency from two-dimensional (2D) retinal images. The experience of different physical properties of objects depends on the visual system's ability to correctly attribute luminance variations to their physical causes. For example, how does our brain differentiate a contour caused by a change in albedo (i.e., pigmentation) from an occluding boundary? It has been proposed that shading gradients adjacent to contours are critical for differentiating shading from pigmentation and specular reflectance (Ben-Shahar & Zucker, 2001; Kim, Marlow, & Anderson, 2011, 2014), and changes in shading flow have been associated with occluding boundaries (Ben-Shahar, Huggins, & Zucker, 2002). Here, we consider whether changes in shading direction contribute to perceptual organization of surfaces in depth. 
The structure of images depends on complex interactions between surface reflectance, 3D shape, and the structure of the light field. Changes in a surface's reflectance and its orientation relative to the light source have differential effects on the shading in an image. Figure 1 shows a surface illuminated from the upper right generating an occlusion boundary where a tile hides a distant bumpy background. This contour boundary is different from the reflectance boundary generated where two tiles meet at approximately the same location in depth. One task for vision science is to explain how we visually classify these contours in images and then use this information to perceptually organize objects in depth with apparent surface properties. 
Figure 1
 
Image contours generated by the occluding boundary (A) and reflectance boundary (B) of a planar surface. The transformation on the right shows the isophotes for the same image. Note how the orientation of isophotes is discontinuous across an occluding boundary but mostly preserved across reflectance boundaries.
Researchers have identified potential invariants that could be easily computed across a variety of illumination and viewing conditions (Breton & Zucker, 1996; Koenderink & van Doorn, 1980). Koenderink and van Doorn (1980) identified photometric invariants for the computation of 3D shape. They found that the structure of isophotes—contours of constant image intensity—tends to vary in similar ways across changes in lighting direction. Breton and Zucker (1996) further observed that the contours generated by the boundaries of cast shadows tend to differ from those of attached shadows, which are congruent with isophotes in diffuse shading. They proposed that the identification of attached shadows can be used to resolve the primary lighting direction and identify diffuse shading. Fleming, Torralba, and Adelson (2004) proposed that isophotes could be computed using visual filters similar to those of the human visual system. These filters estimate the local polar direction of shading gradients in the image, orthogonal to the direction of isophotes. These shading directions are known as orientation fields.
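As a concrete illustration of this idea, an orientation field can be approximated from image-intensity gradients. The Python sketch below is our own simplified stand-in for the filter-based estimates discussed above, not code from the cited studies:

```python
import numpy as np

def orientation_field(image):
    """Estimate the local shading direction (degrees) at each pixel as the
    polar angle of the intensity gradient; isophotes run orthogonal to this
    direction. A simplified illustration, not the models' actual filters."""
    gy, gx = np.gradient(image.astype(float))  # rows ~ y, columns ~ x
    theta = np.degrees(np.arctan2(gy, gx))     # gradient (shading) direction
    return np.mod(theta, 360.0)

# A vertical luminance ramp (brighter toward larger row index): the shading
# direction points "down" the image at every pixel, and the isophotes run
# horizontally, orthogonal to it.
ramp = np.tile(np.arange(8.0)[:, None], (1, 8))
field = orientation_field(ramp)
```

For this ramp the estimated shading direction is 90° everywhere; real shaded surfaces would of course yield spatially varying fields.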
Although the shading can be contaminated by pigmentation and other sources of luminance variation, orientation fields were originally proposed to help differentiate shading independent of reflectance boundaries. Ben-Shahar and Zucker (2001) found that orientation fields are preserved across reflectance boundaries, irrespective of changes in the direction of illumination. These computations therefore selectively extract information necessary for the recovery of shape from shading. Kim et al. (2014) found that similar computations can account for many observer judgments of surface pigmentation. They controlled the orientation of reflectance boundaries relative to shading isophotes, and found that imposing pigment-shading incongruence increased both estimates of pigmentation contrast and model differences in orientation fields at reflectance boundaries. 
Orientation flows can provide estimates not only of surface curvature and texture but also of depth across smooth occlusion boundaries. Ben-Shahar et al. (2002) showed that shading attributed to shape can be distinguished from occlusion boundaries based on changes in orientation of shading across an occluding contour that do not occur across reflectance contours. They showed that strong shading gradients are generated toward self-occlusion boundaries that differ in orientation from shading across occluded surface regions. Information about the strength of gradients running orthogonally toward the contour could be used to perceptually attribute one region to the foreground and the other to the background (Palmer & Ghose, 2008). In their most recent work, Ghose and Palmer (2016) showed that perceived occlusion increased with the angle formed between the orientations of isophotes generated by the occluded and occluding surfaces (or equiluminance angle). Higher equiluminance angles increase the strength of “gradient cuts” on one side of the occluding contour in the image. They showed that increasing the equiluminance angle monotonically decreased the likelihood of perceiving the side with the gradient cut as that of a surface in the foreground. Instead, the surface on the side forming the extremal edge is increasingly seen as the foreground. 
Perceived figure-ground distinctions may not solely depend on the formation of extremal edges and the gradient cuts they generate. Situations like Figure 1 often arise where gradient cuts are imposed in the internal shading of foreground surfaces. This can also occur when holes are punched into the surface of a curved foreground object. In these situations, no extremal edges are generated; the shading isophotes of both the foreground and background can be incongruent with the orientation of the occluding boundary. What conditions are imposed in the visual interpretation of these edge constraints? We conducted three experiments to examine how these variations in shading influence perceived surface depth order across contours. Experiment 1 examined the dependence of perceived depth more generally on differences in shading direction per se, a property we term delta shading. In the absence of extremal shading, there is an inherent ambiguity in inferring figure from ground. Experiment 2 examined whether this ambiguity is resolved by an assumed lighting from above prior. Experiment 3 examined the effect of this top-down bias on the interpretation of amodal completion. 
Experiment 1
Previous studies showed that occlusion contours generated by the smooth gradient at the bounding contour of an object are geometrically different from the sharper changes in luminance generated by reflectance edges (Ben-Shahar et al., 2002). Indeed, related research showed that frontally illuminated convex foreground objects tend to generate shading gradients that run toward the object's bounding contour—an extremal edge—that can itself help disambiguate figure-ground relationships (Palmer & Ghose, 2008). In other words, the foreground surface will tend to generate greater congruence of isophotes with respect to the bounding contour. However, rather than depending on smooth changes in shading toward an edge, the perception of occlusion boundaries could depend more generically on sharp changes in shading direction per se (e.g., gradient cuts; Ghose & Palmer, 2016). 
We examined how changing the direction of shading gradients affected the perceptual classification of internal luminance variations within the object's global bounding contour. This revealed how perceived occlusions depended on differences in shading direction across an edge. We refer to this difference in the angular orientation of shading flow on either side of an edge as the delta shading angle (δ). Experiment 1 varied delta shading across reflectance contours of textured surfaces and measured perceived occlusion. To this end, we generated artificial hybrid surfaces, in which different parts of an object have shading gradients in different directions produced by illumination from differently oriented light sources. We investigated the depth perception engendered by these virtual 3D objects. If the classification of edges as occlusion boundaries depends on incompatibility in the directions of shading flows, then the perceived certainty of a change in depth across an edge should increase with the difference in shading angle. 
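Because shading direction is a polar angle, the delta shading angle between the two sides of an edge is naturally measured as a circular difference in the range 0° to 180°. A minimal sketch (an illustrative helper, not the authors' code):

```python
def delta_shading(theta_p, theta_q):
    """Circular difference (degrees, 0-180) between the polar shading
    directions theta_p and theta_q on the two sides of an edge.
    Hypothetical helper illustrating the delta shading definition."""
    d = abs(theta_p - theta_q) % 360.0
    return min(d, 360.0 - d)
```

Under this measure, regions lit from above (0°) and below (180°) give the maximum delta shading of 180°, matching the most extreme condition of Experiment 1.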
Materials and method
Observers
Our six adult observers were naïve to the objective of the study. All of our experiments conformed to the ethical principles of the University of California San Diego and to the Declaration of Helsinki. 
Stimuli
We generated a 3D mesh of an egg-shaped surface in Blender by displacing the vertices of a geodesic sphere by the values of random cloud noise. The object was viewed from the front and covered in a texture produced by binarizing a cloud-noise texture in Blender, somewhat resembling military camouflage paint. The simulated reflectance of the two pigmented regions was R = 0.4 and R = 0.6. Stimulus images were generated using the method outlined in Figure 2. The illumination direction was systematically varied from 0° (above the object) to 180° (below it) to generate different angular orientations of shading flow across the surface; the symbols p and q in Figure 2 denote the two different reflectance regions illuminated from different directions. 
Figure 2
 
Method used to generate shading offset across reflectance boundaries. Pigmented regions (variations in reflectance) were imposed across an egg-shaped object using a binarized cloud-noise texture. The object was illuminated from different directions by a hemi-lamp in Blender (ranging clockwise in orientation between 0° and 180°). The separate light and dark regions in albedo (p and q, respectively) were isolated into separate images (exemplified here with p0 and q135) by subtracting the luminance of a binarized mask for each textural element. Individual stimulus images with different levels of delta shading were the combined isolated shading profiles for different illumination directions, as shown on the right with the 135° condition (δ135).
Rather than using a collimated light source, we illuminated our surfaces using a hemispheric (hemi-) lamp in Blender 3D, a lighting model similar to one described previously (Horn & Sjoberg, 1979). This lighting model is more diffuse than a collimated light source, more closely models the illumination on a cloudy day (Langer & Bülthoff, 2000), and eliminates the generation of cast shadows. It was nevertheless appropriate for illuminating the convex surfaces used in the current study, as it ensured that soft directional gradients were preserved across the surface and throughout attached shadow regions. 
We isolated the light (p) pigmented region into a separate image by eliminating the luminance in all of the dark (q) regions. This elimination was performed by subtracting the luminance of a rendering where the q region was set to white and the p region set to black. The alternative isolation of q regions was also performed by eliminating the image intensity in all p regions. We parametrically varied the angular differences in shading direction across pigmentation boundaries by additively recombining isolated p and q surface regions illuminated from different lighting directions. We refer to the resulting change in shading direction generated by the imposed difference in the angles of lighting as delta shading. 
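The isolation-and-recombination step can be summarized as masked addition of two renders of the same surface. The sketch below assumes grayscale renders and a binary mask (1 for p regions, 0 for q regions); all names are illustrative, not taken from the authors' pipeline:

```python
import numpy as np

def combine_regions(render_a, render_b, mask):
    """Keep the shading of the light p regions from render_a and of the
    dark q regions from render_b, then sum the isolated parts. A sketch
    of the masking step described in the text, not the authors' code."""
    p_only = render_a * mask          # q regions zeroed out
    q_only = render_b * (1.0 - mask)  # p regions zeroed out
    return p_only + q_only

# Toy example: two uniform "renders" and a 2 x 2 checkerboard mask.
mask = np.array([[1.0, 0.0], [0.0, 1.0]])
lit_from_above = np.full((2, 2), 0.8)
lit_from_below = np.full((2, 2), 0.2)
combined = combine_regions(lit_from_above, lit_from_below, mask)
```

In the actual stimuli, render_a and render_b would be full renders of the egg-shaped object lit from directions differing by the desired delta shading angle.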
Delta shading was parametrically increased by combining shading of lighter reflectance regions illuminated from the zenith direction with darker reflectance regions illuminated from directions deviating from zenith by up to 180°. Figure 3 (upper row) shows images with different levels of delta shading across textural contours. The lighter parts (p regions) are all the same and are lit from above (0°), whereas the darker parts (q regions) are taken from renderings lit from 45°, 90°, 135°, or 180°, respectively. For comparison, Figure 3 (lower row) shows the original images with the same lighting direction generating shading across regions p and q, which preserved shading across texture edges. Each of these has zero delta shading (i.e., δ = 0) and is veridically seen as the convex front surface of a solid object. Whereas the images in Figure 3 (lower row) all look like convex surfaces, those in Figure 3 (upper row) appear to separate increasingly into two different surfaces lying in depth. The front surface appears convex, whereas the rear appears as an independent background seen through apparent "gaps" in the front surface. The borders that were formerly interpreted as pigmentation boundaries are here reinterpreted as occlusion boundaries between separate surface regions p and q. 
Figure 3
 
Stimulus images with shading offset used in Experiment 1. Upper row: Images generated by increasing the difference in shading angle, delta (δ), between surface regions flanking reflectance contours (δ = px + qy, where x = 0° and y takes on larger angular values up to 180°). These appear to separate out into two surfaces lying at different depths. Lower row: For comparison, the original surfaces illuminated with different oblique angles of illumination (values in degrees). All have zero delta shading, and all appear as solid closed surfaces.
Procedure
Observers were initially briefed and shown an image of an “L-shaped” surface similar to that presented in Figure 1. Their attention was drawn to how surface regions across edge contours can vary in depth or remain continuous in depth across texture boundaries. They were informed that they would need to select which of two images appeared to contain greater variation in depth across the surface in the image. Hence, stimulus images were presented using the paired comparisons method in a two-alternative forced-choice design. 
Images were presented on a 15-in. LCD flat-panel display (luminance range = 4.8 to 76.4 cd/m2; Toshiba, Japan). Surface regions in images subtended approximately ±3.2° of visual angle both horizontally and vertically. A total of 20 paired-presentation trials was presented to each observer (5 × 5 − 5 trials). The stimuli were the delta-shaded images in Figure 3 (upper row) in addition to the original stimulus image in which both surface regions were illuminated by the upright light field. Thus, the spatial gradients across contours in an image differed by δ = 0°, 45°, 90°, 135°, or 180°. The trials were randomized and counterbalanced across the display. Each trial was presented for no more than 5 s to impose a time constraint on observers' judgments. They pressed one of two designated keys to indicate which image appeared to have more salient depth variation. 
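The 20-trial count follows from presenting every ordered pair of distinct delta shading conditions; enumerating the pairs makes the 5 × 5 − 5 arithmetic explicit (a sketch of the design, not the experiment software):

```python
from itertools import permutations

# Delta shading conditions of Experiment 1, in degrees. Taking every
# ordered pair of distinct conditions yields 5 x 5 - 5 = 20 trials and
# counterbalances which image of a pair appears on which side.
conditions = [0, 45, 90, 135, 180]
trials = list(permutations(conditions, 2))
```

Randomizing the order of this list would then complete the paired-comparisons design described above.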
Results and discussion
Figure 4 plots means and standard errors of perceived depth variation as a function of imposed differences in shading angle. A one-way ANOVA found a significant main effect of delta shading angle on perceived change in surface depth, F(4, 24) = 29.38, p < 0.00001. This result suggests that increasing the difference in direction of shading flow on either side of the edge monotonically increases the likelihood of its perceptual classification as an occluding boundary. 
Figure 4
 
Perceived change in depth plotted as a function of delta shading angle. Mean probability estimates of perceived change in figure-ground depth order for surface images with increasing delta in shading angle shown in blue. Dashed lines outline individual observer probability estimates. Error bars are standard errors of the mean.
Thus, differences in shading direction across edge boundaries appear to be used to make monocular distinctions between separate surfaces situated at different distances in depth. Indeed, observers informally reported that regions with deviated shading generated by more oblique or inverted lighting directions appeared to lie farther in depth than regions illuminated by the light source from above. We quantified these potential figure-ground distinctions triggered by delta shading by obtaining more specific judgments from observers in the next experiment. 
Experiment 2
The previous experiment showed that perceived differences in figure-ground surface depth order depend on local angular differences between adjacent directions of shading flow. These delta shading variations appear to provide strong cues for classifying a given image contour as an occlusion boundary. But how do observers determine which side of the contour is the near (figure) surface and which is the far (ground) surface? For self-occluding boundaries, Palmer and Ghose (2008) proposed that the side with the steeper shading gradient orthogonal to the edge contour (extremal shading) owns the edge contour. However, our displays contained gradients of similar steepness on either side of edge contours that varied widely in shading direction relative to the edge. Although it is equally likely that either side could own the edge contour, it is possible that the attribution of one region as figure and the other as ground depends on perceived convexity or concavity in surface curvature. 
Previous research has suggested that perceived curvature depends on an illumination from above prior (Luckiesh, 1916, 1922; Mamassian & Goutcher, 2001; Metzger, 1975; Ramachandran, 1988). These studies found that potential depth-reversal ambiguities are resolved by the perceptual interpretation of gradients based on an assumed primary lighting direction from above. It is possible that the edge contour is then assigned to the surface region that is perceptually estimated as convex, which also tends to be perceived as a foreground occluding figure (Peterson & Salvagio, 2008). In Experiment 2, we determined whether the attribution of edge contours based on an illumination from above bias dictates figure-ground distinctions in our displays with increased delta shading. 
Materials and method
Observers
Our seven adult observers were the six from Experiment 1 plus one additional observer who was completely naïve to the study. All were naïve to the purposes of the current experiment. 
Stimuli
We used the same set of five images from Experiment 1 ranging in delta shading angle between 0° and 180°. However, images were presented either upright or inverted by 180°. Figure 5 shows upright and inverted versions of the original image with zero delta shading and the image with maximum delta shading of 180°. 
Figure 5
 
Upright and inverted versions of the stimulus images. (A) The original image with zero delta shading presented upright (left) and inverted (right). This is the same image as the zero delta shading condition in Experiment 1. (B) The image with maximum delta shading at 180°, also presented upright and inverted. This image is the same as the 180° delta shading condition in Experiment 1.
Procedure
Instead of the 2AFC task used in the previous experiment, we implemented an open 3AFC task that required the observer to select image regions that appeared as occluded surfaces situated behind foreground surface regions. If the surface appeared to exhibit no step change in depth across the image (i.e., appeared to be an unbroken closed surface), then observers simply clicked the mouse cursor on the black background. Otherwise, the observer was instructed to move the cursor to the surface region that appeared to fall farthest in depth. Hence, the observer endeavored to click on apparent holes revealing deeper surface regions, if any were perceived. We recorded which of the three regions (black background, lighter p regions, or darker q regions) the mouse cursor was clicked over. We then computed the probability that each region was selected when presented with the array of upright and inverted stimulus images in random order. This was computed by dividing the number of times a response type was made for a given image condition by the number of times the image was presented. 
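The probability computation described above amounts to normalizing response counts by the number of presentations; a hypothetical helper (region labels are illustrative):

```python
from collections import Counter

def selection_probabilities(responses):
    """Proportion of presentations on which each region was selected.
    'responses' lists the region clicked ('p', 'q', or 'background') on
    repeated presentations of one image condition. An illustration of
    the computation described in the text, not the authors' code."""
    counts = Counter(responses)
    n = len(responses)
    return {region: counts[region] / n for region in ('p', 'q', 'background')}
```

Applying this per image condition, separately for upright and inverted presentations, yields the proportions plotted in Figure 6.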
Results and discussion
Figure 6 shows stacked area plots for the proportions of response types (selecting either p or q regions as appearing farther in depth or neither of those) made for each of the upright and inverted stimulus images. Separate axes are used to show the probability of each response to each of the images varying in delta shading and uprightness. There was a greater relative proportion of responses favoring darker-albedo q regions when presented with upright images. The dominant proportion of responses switched in favor of the lighter-albedo p regions as appearing farther in depth when images were inverted. 
Figure 6
 
Stacked area plots show proportion of surface region perceived in depth. (A) Shaded regions show proportions of responses favoring p and q regions in upright images. (B) Responses favoring p and q regions in inverted images. The “neither” zone indicates proportion of images selected as containing no stepped variation in depth across image contours. Arrows indicate the direction of shading flow from light to dark (i.e., direction of lighting across the convex surface geometry). The largest proportion of responses to p and q zones consistently favors image regions with inverted lighting at higher levels of delta shading (upward pointing arrows).
Separate two-way ANOVAs were performed on data obtained for responses to the p and q regions. For selection of the darker q regions, there was a significant main effect of delta shading angle on perceived surface depth, F(4, 24) = 4.36, p < 0.01. There was also a significant effect of image orientation on perceived depth attributions, F(1, 6) = 9.14, p < 0.05. There was also a significant interaction effect between image orientation and delta shading on perceived depth variations, F(4, 24) = 5.08, p = 0.005. These results reveal that observers perceived the darker albedo regions as farther in depth when images were upright compared with inverted. For selection of the lighter p regions, there was a significant main effect of delta shading angle on perceived surface depth, F(4, 24) = 6.95, p < 0.001. There was also a significant main effect of image orientation on perceived depth attributions, F(1, 6) = 11.11, p < 0.05. A significant interaction effect was found between image orientation and delta shading on perceived depth variations, F(4, 24) = 3.33, p < 0.05. This interaction effect indicates different rates of increase in preference for selecting p as a function of delta shading between upright and inverted presentations. 
The effect of delta shading on perceived depth is supported by the consistent main effect of delta shading angle on perceived depth variations. The results for selection of p regions are the reciprocal of those for selection of the darker q regions; preference for p and q regions is seen to switch between upright and inverted images. The reciprocal effects of image inversion on perceived surface interposition in depth support the interpretation that the visual system imposes an assumed light source from above when attributing contour ownership to foreground and background surface fragments. We find that physically convex surface fragments that are consistent with lighting from below are perceived as farther in depth than similarly convex surface regions compatible with lighting from above. 
It should be noted that although the p and q regions are both physically convex, only one of them is perceived as such in the most extreme case of delta shading. For example, at the delta shading angle of 180° when images were inverted, observers did not always avoid selecting the region illuminated from above as appearing farther in depth. In this condition, observers selected the region illuminated from above as appearing behind on approximately 25% of presentations. In other words, they perceived the convex region illuminated from below as in the foreground in fewer than 50% of trials, which could indicate some residual potential for perceived convexity to override a relatively stronger competing lighting from above prior. 
Experiment 3
The previous experiments showed that the perceived attribution of edge contours to occluding boundaries generated by foreground objects depends on incongruence in shading direction—delta shading—and its interpretation based on an assumed light source from above. The variation in shading consistent with lighting from below was perceived to be farther in depth. Hence, rather than appearing continuous with varying convexity and concavity, the increased delta shading at edge contours appears to generate a perceived break in surface continuity. 
One potential approach to exploring the role of delta shading in the appearance of surface discontinuity is to measure the perceived strength of amodal completion. Nakayama, Shimojo, and Silverman (1989) proposed that the edges formed at occlusion boundaries will be intrinsically owned by the occluding foreground surface. These same edge contours are therefore classified as extrinsic to the occluded contours of surfaces situated farther in depth. These occluded structures will then appear continuous, completing amodally behind the foreground surface. To verify whether high delta shading generates perceived occlusion boundaries attributed to breaks in surface continuity, we generated ambiguous circular contours that could be perceived either as discontinuous contours in the foreground or as amodally continuous contours in the background. 
Materials and method
Observers
A total of six observers participated in this experiment. All had previously participated in Experiments 1 and 2, but were naïve to the objective of the current experiment. 
Stimuli
We rendered images of a spherical object with the light source either above or below, consistent with the extreme delta shading angles of the previous experiments. As outlined in Figure 7, we separated the differently illuminated versions of the sphere into separate regions by subtracting the luminance of an appropriate template. Both had the same simulated albedo (R = 0.6) and size (±3.7° visual angle). Red and blue rings of different radii (red = ±2.2° and blue = ±2.0° of visual angle) were then added to the two image regions, which were subsequently combined. Movie 1 shows an animation of the amoeba-like structure spatially shaded from light to dark against an adjacent spherical shading pattern shaded in the opposite direction (180° out of phase). 
Figure 7
 
Method used to create the delta-shaded image of Experiment 3. Circles of different radii were initially drawn on images of spheres illuminated from above and below. Complementary cog-shaped regions were punched out by image subtraction and then alternately combined into the final image on the right.
 
Movie 1.
 
Amodal completion varies with shading orientation. The cog-like central contour defines regions with delta shading offset by up to 180°. As the image is rotated over a 0° to 360° range, the perceived depth order and continuity of the blue and red circular contours appears to switch. Each ring is completed amodally when perceived farther in depth than the alternate ring, which appears as dashed contours on the front of an amoeboid surface perceived in the foreground.
Sometimes the amoeba looked like a convex island in front of a shaded disk situated in the background. In this condition the outward-pointing fingers of the rotating island appeared to be marked with short, tangential blue stripes that rotated with the fingers and looked like a moving, dashed blue circle. The red circle appeared as a stationary, complete (not dashed) red circle that was amodally completed behind the rotating island. 
Alternatively, the amoeboid shape could look like an irregular hole cut into a rotating convex curved surface patch, which allowed a view of a separate background interior. In this condition the rotating convex surface seemed to have short fingers that jutted inwards around the edges of the hole. These fingers appeared to be marked with short, tangential red stripes that rotated with the fingers and looked like a moving, dashed red circle. Now the blue circle appeared to lie behind the hole, possibly on a background surface, as a stationary, complete (not dashed) blue circle that was amodally completed behind the curved convex surface perceived in the foreground. 
Hence, the colored “rings” either appeared to pass (amodally) behind the finger-like structures, or to stick like texture to the front surface of each finger. Informally, continuous rotation of the image in Movie 1 generates alternations in which of the blue and red rings appears amodally completed. 
Procedure
Two image animations were presented side by side, one upright (0°) and the other inverted (180°). The images oscillated around the viewing axis by ±5° at approximately 1.0 Hz to make the task more engaging. As shown in Figure 8, a red or blue reference at the bottom of the display indicated which contour observers needed to attend to on a given trial. The observer had up to 5 s to study the two animations and decide which contained a ring of the same color as the target that appeared more like a completed ring situated behind the finger-like radial structures. Observers indicated their response with the LEFT and RIGHT arrow keys, and responses were recorded for later analysis on each of the 12 randomized and counterbalanced trials (2 colored targets × 2 orientations × 3 repeats). 
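The trial schedule described above (2 target colors × 2 image orientations × 3 repeats = 12 randomized trials) can be sketched as follows; the variable names and session structure are illustrative assumptions, not the authors' experimental code.

```python
import itertools
import random

# Illustrative trial schedule: 2 colors x 2 orientations x 3 repeats = 12
# trials, fully counterbalanced and presented in random order.
COLORS = ("red", "blue")
ORIENTATIONS = (0, 180)   # upright vs. inverted, in degrees
REPEATS = 3

trials = list(itertools.product(COLORS, ORIENTATIONS)) * REPEATS
random.shuffle(trials)

for color, orientation in trials:
    # Present the oscillating animations, show the colored reference,
    # and collect a LEFT/RIGHT keypress within the 5-s viewing window.
    pass
```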
Figure 8
 
Layout of the two-alternative forced-choice task in Experiment 3. The same stimulus image differed by 180° in orientation across the screen. The observer's task was to select the side of the display containing the ring that was the same color as the target in the lower center and appeared as continuous behind finger-like surface regions.
Results and discussion
Figure 9 shows mean probabilities of perceiving the red and blue contours as amodally completed for the upright and inverted versions of the stimulus image. 
Figure 9
 
Bar plot of mean probabilities for experiencing contours amodally. Separate pairs show responses to images that were upright or inverted (see lower image insets). Blue bars indicate the probability of perceiving blue contours as amodally completed rings; red bars indicate the probability of perceiving red contours as amodally completed rings. Note how the preference for different-colored rings switches with image orientation. Error bars are standard errors of the mean.
For upright images, repeated-measures t tests showed that observers were significantly more likely to amodally complete the blue than the red contours (t6 = 3.27, p < 0.05), but for the inverted images they were significantly more likely to amodally complete the red than the blue contours (t6 = 3.04, p < 0.05). 
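The repeated-measures comparison can be illustrated with a minimal sketch. The per-observer probabilities below are hypothetical placeholders (not the published data), and the critical value shown is for a two-tailed test at α = .05.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    """Paired (repeated-measures) t statistic for two matched samples."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

# Hypothetical per-observer probabilities of amodally completing the blue
# ring in upright images (NOT the published data); red is the complement.
p_blue = [0.9, 1.0, 0.8, 1.0, 0.9, 0.8]
p_red = [1 - p for p in p_blue]

t = paired_t(p_blue, p_red)
T_CRIT = 2.571  # two-tailed critical t for df = 5, alpha = .05
print(f"t({len(p_blue) - 1}) = {t:.2f}, significant: {abs(t) > T_CRIT}")
```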
Although we used blue and red rings to define the two occluding regions, the perceived depth does not appear to be due to chromostereopsis. Chromostereopsis is a percept of depth between adjacent red and blue surfaces, likely caused by differences in the refraction of long- and short-wavelength light by the cornea (Thompson, May, & Stone, 1993): red surfaces tend to pop out relative to adjacent blue surfaces, which appear sunken. Because this effect depends only on chromatic polarity, it is invariant to image orientation, and therefore cannot account for the effect we observed: the pattern of depth relationships reversed when our stimulus images were inverted. 
These results are highly consistent with the view that delta shading generates the appearance of occlusion boundaries that define surfaces lying at different depths. The effect of image orientation on amodal completion further suggests that edge contours defined by delta shading are attributed as intrinsic to specific foreground surfaces, which causes the alternately colored contours to appear amodally completed. These findings are discussed in the following material. 
General discussion
We sought to determine the dependence of perceived occlusion on differences in the direction of image shading generated by adjacent surface patches, which we termed delta shading. Experiment 1 showed that increasing the amount of delta shading could cause an edge formerly attributed to a change in albedo to look instead as though it was caused by an abrupt change in scene depth. The discontinuity of surface structure with locally increased delta shading angle was evident in Experiment 2, where we instructed observers to indicate which surface regions appeared farther away in depth. Observers predominantly selected surface regions that were illuminated from below, implying that delta shading not only drives the classification of edge contours as occlusion boundaries, but that the ownership of such edges is determined by perceived surface concavity and an illumination-from-above prior. To further verify that the perceived variation in depth was consistent with an apparent surface discontinuity, we instructed observers to perform an amodal completion task (Experiment 3). Results showed that contours could complete amodally behind surface regions perceived in the foreground, implying that delta shading generates a clear occlusive discontinuity in perceived surface structure. 
The differentiation of occluding contours from reflectance contours appears to be driven by differences in the direction of shading across image contours. Ben-Shahar and Zucker (2001) showed that reflectance edges form boundaries between surface regions that differ in albedo. They demonstrated that these boundaries do not significantly alter the smooth gradients across edges that are needed to compute shape from shading. However, occlusion boundaries do generate differences in the amplitude and direction of smooth gradients running toward the contour (Ben-Shahar et al., 2002), which can terminate with the imposition of abrupt gradient cuts on at least one side of the contour (Ghose & Palmer, 2016). Consistent with this view, we found that edge contours could be switched perceptually from reflectance edges to occlusion edges simply by varying the lighting direction across the edge, and hence the shading direction across the surfaces abutting the reflectance boundary. 
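The diagnostic difference in shading direction across a contour can be made concrete with a small sketch. This is an illustrative reconstruction, not the authors' analysis: two abutting regions are shaded in opposite vertical directions, and the angular difference between their median gradient orientations recovers the 180° delta across the edge.

```python
import numpy as np

# Illustrative sketch (not the authors' code): estimate the local shading
# direction on each side of a vertical edge and take the angular difference.
h, w = 64, 64
y = np.arange(h)[:, None] * np.ones((1, w))
img = np.empty((h, w))
img[:, : w // 2] = y[:, : w // 2] / h       # left half: shaded top-to-bottom
img[:, w // 2 :] = 1 - y[:, w // 2 :] / h   # right half: shading reversed

gy, gx = np.gradient(img)                    # gradients along y then x
theta = np.arctan2(gy, gx)                   # per-pixel gradient orientation

# Median orientation on each side, excluding pixels right at the edge.
left = np.median(theta[:, : w // 2 - 2])
right = np.median(theta[:, w // 2 + 2 :])
delta = np.degrees(abs(left - right)) % 360
print(f"delta shading across the edge: {delta:.0f} degrees")
```

A reflectance edge between equally lit regions would instead yield matched gradient orientations on both sides, and a delta near 0°.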
Previous perceptual studies have found that, once an edge is classified as an occlusion boundary, the visual system must further assign which surface is perceived in front. The region that is favored depends on various Gestalt image properties (see Wagemans et al., 2012, for a review). Also, adding shading consistent with surface convexity can break figure-ground ambiguities, transferring ownership of the potential occluding edge to the region with the steepest shading gradients (Ghose & Palmer, 2010, 2016; Palmer & Ghose, 2008). These “extremal edges” were defined as contours associated with image regions of steepest shading running in the direction toward the occluding contour (Ghose & Palmer, 2010). Ghose and Palmer (2016) extended this work to show that the likelihood of perceiving a change in surface position in depth increased as a function of the difference in orientation of isophotes (i.e., equiluminance contours) between the foreground and background surfaces abutting an occluding contour. Our results show that the attribution of edge contours to occlusion depended more generally on the difference in orientation of the gradients, irrespective of the direction in which they ran relative to the perceived occluding contour. However, the ownership of the classified edge was ambiguous and depended on shape-from-shading computations that invoked an assumed illumination-from-above prior and/or a convexity bias, indicating the role of midlevel visual processing. 
We found reliable evidence to suggest that perceived surface discontinuity depends on midlevel representations of surface organization. Experiment 2 revealed that simply changing the orientation of an image with significant delta shading caused image regions perceived as convex surfaces to switch to looking like an independent surface or void situated in the background. Experiment 3 demonstrated in an independent task that the apparent switch in surface depth order was strong enough to drive amodal completion. Previously, Nakayama et al. (1989) showed that amodal completion depends on constraints that cause shared edge contours between occluded and occluding figures to be assigned intrinsic ownership to an apparent foreground surface. Once assigned, the surface extrinsic to the edge will appear as amodally complete behind the foreground surface. We used amodal completion to show that changing the orientation of an image could generate clear differences in the depth order of adjacent surfaces. We found that different-colored contours would appear completed and circular when they were perceived to fall behind an apparent foreground surface region. Changing the image orientation not only switched apparent depth ordering but also caused the alternate colored rings to appear amodally completed. This indicates that perceived surface occlusion is strongly determined by delta shading relationships at mutual edge boundaries. 
Our findings support the view that assigning edge contours in images to appropriate physical causes in the environment depends on the orientation of shading patterns. Recent research has demonstrated the potential importance of such edge contours in driving judgments of surface shape and material properties (Egan, Todd, & Kallie, 2015; Marlow, Todorovic, & Anderson, 2015). Ongoing psychophysical work in computational vision will help not only to elucidate the visual processes underlying the experience of surfaces and materials, but also to inform perceptual models for machine vision applications that rely on monocular images. 
Acknowledgments
JK was supported by an Australian Research Council (ARC) Future Fellowship (FT140100535). SA was supported by a grant from the UCSD Department of Psychology. Thanks to J. Lopez, J. Yuan, J. Xu, M. Lao, A. Lee, N. Dykmans, and S. Kaneko for their assistance. 
Commercial relationships: none. 
Corresponding author: Juno Kim. 
Email: juno.kim@unsw.edu.au. 
Address: School of Optometry and Vision Science, University of New South Wales, Kensington, New South Wales, Australia. 
References
Ben-Shahar O., Huggins P. S., Zucker S. W. (2002). On computing visual flows with boundaries: The case of shading and edges. In Biologically motivated computer vision ( pp. 189–198). Berlin Heidelberg: Springer-Verlag.
Ben-Shahar O., Zucker S. W. (2001). On the perceptual organization of texture and shading flows: From a geometrical model to coherence computation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1, 1048–1055.
Breton P., Zucker S. W. (1996). Shadows and shading flow fields. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 782–789.
Egan E., Todd J., Kallie C. (2015). The effects of smooth occlusions and directions of illumination on the visual perception of 3D shape from shading. Journal of Vision, 15 (12): 966, doi:10.1167/15.12.966. [Abstract]
Fleming R. W., Torralba A., Adelson E. H. (2004). Specular reflections and the perception of shape. Journal of Vision, 4 (9): 10, 798–820, doi:10.1167/4.9.10. [PubMed] [Article]
Ghose T., Palmer S. E. (2010). Extremal edges versus other principles of figure-ground organization. Journal of Vision, 10 (8): 3, 1–17, doi:10.1167/10.8.3. [PubMed] [Article]
Ghose T., Palmer S. E. (2016). Gradient cuts and extremal edges in relative depth and figure-ground perception. Attention, Perception & Psychophysics, 78 (2), 636–646.
Horn B. K., Sjoberg R. W. (1979). Calculating the reflectance map. Applied Optics, 18 (11), 1770–1779.
Kim J., Marlow P., Anderson B. (2011). The perception of gloss depends on highlight congruence with surface shading. Journal of Vision, 11(9): 4, 1–19, doi:10.1167/11.9.4. [PubMed] [Article]
Kim J., Marlow P., Anderson B. (2014). Texture-shading flow interactions and perceived reflectance. Journal of Vision, 14(7): 1, 1–19, doi:10.1167/14.7.1. [PubMed] [Article]
Koenderink J. J., van Doorn A. J. (1980). Photometric invariants related to solid shape. Optica Acta: International Journal of Optics, 27 (7), 981–996.
Langer M. S., Bülthoff H. H. (2000). Depth discrimination from shading under diffuse lighting. Perception, 29 (6), 649–660.
Luckiesh M. (1916). Light and shade and their applications. New York: van Nostrand Company.
Luckiesh M. (1922). Visual illusions: Their causes, characteristics and applications. New York: van Nostrand Company.
Mamassian P., Goutcher R. (2001). Prior knowledge on the illumination position. Cognition, 81, B1–B9.
Marlow P., Todorovic D., Anderson B. (2015). Coupled computations of three-dimensional shape and material. Current Biology, 25 (6), R221–R222.
Metzger W. (1975). Gesetze des Sehens. Frankfurt am Main, Germany: Verlag Waldemar Kramer.
Nakayama K., Shimojo S., Silverman G. H. (1989). Stereoscopic depth: Its relation to image segmentation, grouping and the recognition of occluded objects. Perception, 18, 55–68.
Palmer S. E., Ghose T. (2008). Extremal edges: A powerful cue to depth perception and figure-ground organization. Psychological Science, 19 (1), 77–84.
Peterson M. A., Salvagio E. (2008). Inhibitory competition in figure-ground perception: Context and convexity. Journal of Vision, 8 (16): 4, 1–13, doi:10.1167/8.16.4. [PubMed] [Article]
Ramachandran V. S. (1988). Perceiving shape from shading. Scientific American, 259 (2), 76–83.
Thompson P., May K., Stone R. (1993). Chromostereopsis: A multicomponent depth effect? Displays, 14 (4), 227–234.
Wagemans J., Elder J. H., Kubovy M., Palmer S. E., Peterson M. A., Singh M., von der Heydt R. (2012). A century of Gestalt psychology in visual perception: I. Perceptual grouping and figure-ground organization. Psychological Bulletin, 138 (6), 1172–1217.
Figure 1
 
Image contours generated by the occluding boundary (A) and reflectance boundary (B) of a planar surface. Transformation on right shows the isophotes for the same image. Note how the orientation of isophotes is discontinuous across an occluding boundary, but mostly preserved across reflectance boundaries.
Figure 2
 
Method used to generate shading offset across reflectance boundaries. Pigmented regions (variations in reflectance) were imposed across an egg-shaped object using a binarized cloud-noise texture. The object was illuminated from different directions by a hemi-lamp in Blender (ranging clockwise in orientation between 0° and 180°). The separate light and dark regions in albedo (p and q, respectively) were isolated into separate images (exemplified here with p0 and q135) by subtracting the luminance of a binarized mask for each textural element. Individual stimulus images with different levels of delta shading were the combined isolated shading profiles for different illumination directions, as shown on the right with the 135° condition (δ135).
Figure 3
 
Stimulus images with shading offset used in Experiment 1. Upper row: Images generated by increasing the difference in shading angle, delta (δ), between surface regions flanking reflectance contours (δ = px + qy, where x = 0° and y takes on larger angular values up to 180°). These appear to separate out into two surfaces lying at different depths. Lower row: For comparison, the original surfaces illuminated with different oblique angles of illumination (values in degrees). All have zero delta shading, and all appear as solid closed surfaces.
Figure 4
 
Perceived change in depth plotted as a function of delta shading angle. Mean probability estimates of perceived change in figure-ground depth order for surface images with increasing delta in shading angle shown in blue. Dashed lines outline individual observer probability estimates. Error bars are standard errors of the mean.
Figure 5
 
Upright and inverted versions of the stimulus images. (a) the original image with zero delta shading presented upright (left) and inverted (right). This is the same image as the zero delta shading condition in Experiment 1. (b) the image with maximum delta shading at 180°, also presented upright and inverted. This image is the same as the 180° delta shading condition in Experiment 1.
Figure 6
 
Stacked area plots show proportion of surface region perceived in depth. (A) Shaded regions show proportions of responses favoring p and q regions in upright images. (B) Responses favoring p and q regions in inverted images. The “neither” zone indicates proportion of images selected as containing no stepped variation in depth across image contours. Arrows indicate the direction of shading flow from light to dark (i.e., direction of lighting across the convex surface geometry). The largest proportion of responses to p and q zones consistently favors image regions with inverted lighting at higher levels of delta shading (upward pointing arrows).