Article  |   August 2014
Local edge statistics provide information regarding occlusion and nonocclusion edges in natural scenes
Journal of Vision August 2014, Vol.14, 13. doi:10.1167/14.9.13
      Kedarnath P. Vilankar, James R. Golden, Damon M. Chandler, David J. Field; Local edge statistics provide information regarding occlusion and nonocclusion edges in natural scenes. Journal of Vision 2014;14(9):13. doi: 10.1167/14.9.13.

Abstract

Edges in natural scenes can result from a number of different causes. In this study, we investigated the statistical differences between edges arising from occlusions and nonocclusions (reflectance differences, surface changes, and cast shadows). In the first experiment, edges in natural scenes were identified using the Canny edge detection algorithm. Observers then classified these edges as either an occlusion edge (one region of an image occluding another) or a nonocclusion edge. The nonocclusion edges were further subclassified as due to a reflectance difference, a surface change, or a cast shadow. We found that edges were equally likely to be classified as occlusion or nonocclusion edges. Of the nonocclusion edges, approximately 33% were classified as reflectance changes, 9% as cast shadows, and 58% as surface changes. We also analyzed local statistical properties such as contrast, average edge profile, and slope of the edges. We found significant differences between the contrast values for each category. Based on the local contrast statistics, we developed a maximum likelihood classifier to label occlusion and nonocclusion edges. An 80%–20% cross validation demonstrated that the human classification could be predicted with 83% accuracy. Overall, our results suggest that for many edges in natural scenes, there exists local statistical information regarding the cause of the edge. We believe that this information can potentially be used by the early visual system to begin the process of segregating objects from their backgrounds.

Introduction
Our understanding of the statistical structure of natural scenes has yielded a number of important insights into the processing of information by the mammalian visual system (see, e.g., Atick, 1992; Field, 1987; Hyvärinen & Hoyer, 2001; Kersten, 1987; Simoncelli & Olshausen, 2001). The approach has focused on a range of statistical properties and has been applied to a variety of neural parameters (e.g., spatial and temporal selectivity, color, disparity). This work has helped develop functional interpretations of the receptive field properties of visual neurons as well as provided guidelines for building efficient coding techniques. 
Much of the early work on efficient visual coding focused on spatial statistics of natural scenes and the properties of receptive fields of the primary visual pathways. The Gabor-like nature of V1 receptive field profiles (see, e.g., Field & Tolhurst, 1986; Jones & Palmer, 1987) was argued to produce an optimally sparse representation of natural images (Field, 1987; Olshausen & Field, 1996). To a first approximation, natural scenes can be described as sparse collections of luminance discontinuities (i.e., edges), and receptive fields with Gabor-like profiles provide an efficient description of these edges. However, this line of work has largely drawn no distinction between the different possible causes of an edge. Despite a long history of studies on edge detection, only a handful of studies have focused on the statistics of different edge classes (Balboa & Grzywacz, 2000; DiMattina, Fox, & Lewicki, 2012; Elder, Beniaminov, & Pintilie, 1999; Fowlkes, Martin, & Malik, 2007; Ing, Wilson, & Geisler, 2010). 
In this article, we investigate the local statistics of edges in relation to the particular class of edge. As summarized in Figure 1, a discontinuity in luminance may result from a number of possible causes. First, one object or surface may occlude another. If the illuminations or reflectances of the two surfaces differ, there will be a luminance discontinuity, forming an occlusion edge. Of course, not all occlusions will create luminance discontinuities. The contrast between the two surfaces may be too low, or if there is a significant texture on the surface, the texture can mask any discontinuity across the surfaces. A second class of edge is the nonocclusion edge, which may result from several causes. Nonocclusion edges can arise from a reflectance change within a surface or from a shadow or illumination boundary; alternatively, a nonocclusion edge may be caused by a change in the surface orientation with respect to the illuminant (e.g., a crease or fold). The discontinuity in luminance may also result from a combination of these effects. In our experiment, we designed a triangular slider (see the Methods section for details) that allows participants to make a soft categorization of edges which have multiple causes. 
Figure 1
 
Examples of occlusion and nonocclusion edges. The first image is a modified version of Adelson's checkerboard illusion (web.mit.edu/persci/people/adelson/checkershadow_illusion.html).
Those studies that have investigated differences in edge types have largely focused on occlusions as identified by human observers (e.g., Balboa & Grzywacz, 2000; DiMattina et al., 2012; Hoiem, Efros, & Hebert, 2011). Some of these studies used the Berkeley Segmentation Database (Martin, Fowlkes, Tal, & Malik, 2001), while others used semitrained observers under more controlled conditions. Unfortunately, there is no purely objective method of identifying a particular edge type in a scene. There are edge detection techniques (e.g., Canny, 1986) which will identify an edge according to a specific set of criteria, but such edge detectors can identify only that a luminance discontinuity is present; they cannot determine whether an edge is due to an occlusion between two objects or to some change within an object (nonocclusion). 
Alternatively, several studies have explored the relation between depth discontinuities and luminance discontinuities using laser range finders (Howe & Purves, 2002; Huang, Lee, & Mumford, 2000; Liu, Cormack, & Bovik, 2011; Potetz & Lee, 2003, 2005; Yang & Purves, 2003a, 2003b). These studies have provided important insights into the relationships between depth discontinuities and luminance discontinuities. The data from these techniques (e.g., lidar) could certainly be useful for some of the labeling described here. However, laser range finders also have limitations and biases. They may miss occlusion edges if neighboring surfaces have insufficient depth differences, when a human observer might easily identify the occlusion. It would also be difficult to use the information from lidar to classify nonocclusion edges. If the intensity of the reflected beam is used (i.e., the intensity map), it is possible to gain some insight into the albedo of the surfaces. In theory, this could be used to help differentiate between shadows and reflectance differences. However, we are not aware of studies that have made such an effort, and it is not clear how the errors in these estimates might affect classification. 
Typically, inferring the cause of an edge requires some understanding of the 3-D structure of the image. In many cases, the local information alone is not enough to identify the presence of an occlusion edge. Given a large-scale scene, there could be very good agreement across observers regarding the presence of an edge. However, given only the local patch containing an edge, judgments become unreliable, often with no edge apparent to the observers. This is especially true with junctions. McDermott (2004) tested whether human observers perceived junctions in image patches of various sizes using junctions identified at the full scale of the image as ground truth. Participants were roughly at chance for a 13-pixel-diameter patch around the junction, whereas more than 90% of patches were correctly classified with a 201-pixel-diameter patch. 
A number of studies have attempted to extract significant edges by including regional support for an edge (Canny, 1986; Elder, 1999; Leclerc & Zucker, 1987; Martin, Fowlkes, & Malik, 2004; Shashua & Ullman, 1990; Zhou & Mel, 2008). One goal of such studies is to develop algorithms that perform figure/ground assignment (Fowlkes et al., 2007; Heitger, von der Heydt, & Kubler, 1994; Hoiem et al., 2011; Vecera, Vogel, & Woodman, 2002). These studies have created techniques which take advantage of the smooth structure of natural edges to integrate the contours using long-range information (see, e.g., Elder, 1999; Geisler, Perry, Super, & Gallogly, 2001; Li & Gilbert, 2002). It has been argued that the human visual system performs a form of contour integration similar to these algorithms (see, e.g., Field, Hayes, & Hess, 1993; Geisler et al., 2001). However, these studies have not explored the statistics of different edge types, which may prove useful both for locating significant edges and for combining these edges into veridical contours and figures. 
DiMattina et al. (2012) investigated the difference between image patches that contained hand-labeled occlusion edges and image patches that were labeled as within-surface boundaries. They measured human performance for discriminating the patches compared to various computational techniques for discrimination. As with previous studies, they report that accuracy improved with increasing patch size. A simple linear classifier applied to local edge detectors (a Gabor filter bank) did not perform as well as human observers. However, a neural network combining information across location and scale compared well to the observers. Their study discriminated between occlusion edge patches and surface patches, which may or may not contain any edge. In contrast, the present study focuses on differences in the statistics of different classes of edge patches. 
In this study, we used an objective algorithm (Canny) to define edges in a scene and had human observers classify the edge type as an occlusion or nonocclusion edge and subclassify nonocclusion edges as shadow boundaries, reflectance edges, or surface change edges. We focus on the statistical differences between occlusion and nonocclusion edges. We also want to emphasize that our methods may be subject to a number of potential biases. Classification by human observers will certainly have limitations. These limitations and biases are further discussed in the Methods and Discussion sections of this article. 
In addition to edges detected by the Canny algorithm, we investigated the statistical properties of hand-labeled occlusion edges. Our goal was to determine whether there are local statistical differences between these different classes of edges and to determine whether the local statistics of an edge could provide any information regarding the class of edge. 
Methods
This section describes the experimental procedures used to obtain the categorization of edges as occlusions or nonocclusions. Participants also classified nonocclusion edges further into three subcategories. The categories of edges were as follows: 
  1. Occlusion: formed when an object partially occludes another object
  2. Nonocclusion:
     (a) Reflectance change: formed when there is a change in reflectance due to surface properties
     (b) Surface change: formed when there is a physical angle change on an object's surface
     (c) Cast shadow: formed when an object casts its shadow on another object
Figure 1 shows examples of the occlusion category and three subcategories of nonocclusion edges. 
Apparatus
Stimuli were displayed on an HP LP2465 widescreen LCD monitor. The screen size of the monitor was 52.0 × 32.6 cm (width × height), with a display resolution of 41 pixels/cm and a frame rate of 60 Hz. The display had a minimum, maximum, and mean luminance of 0.38, 350, and 76.5 cd/m2, respectively, and an overall gamma of 2.2. Stimuli were viewed binocularly through natural pupils in a darkened room at a distance of approximately 60 cm. 
Stimuli
Stimuli were generated from 38 high-resolution (2560 × 1920) natural images from the McGill Color Image Database (Olmos & Kingdom, 2004). The images were selected from seven of the nine categories of the McGill Color Image Database: Flowers, Animals, Foliage, Fruits, Landscapes, Winter, and Shadows. No images were selected from the Textures and Man-made categories. The selected images were typically dominated by a small number of objects (as shown in Figure 2), which made the process of hand tracing the edges of objects more straightforward (see Human-labeled occlusion edges). However, we also recognize that this selection may produce some biases in our data (see Discussion). The images were displayed in gray scale with 8-bit resolution and pixel values from 0 through 255. For each image, edges were located using Matlab's Canny edge detection algorithm. The standard deviation of the Gaussian filter used by the Canny algorithm was set to 10 pixels. The low and high thresholds were automatically selected by the Canny algorithm for each image. 
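As an illustration of this detection step outside Matlab, the sketch below implements a simplified gradient-magnitude edge detector in Python/NumPy. It omits the non-maximum suppression and hysteresis stages of the full Canny algorithm, and the fixed `thresh` fraction is our assumption (Matlab selected its thresholds automatically):

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel, normalized to sum to 1."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def smooth(img, sigma):
    """Separable Gaussian blur with reflect padding at the borders."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    blur = lambda v: np.convolve(np.pad(v, r, mode="reflect"), k, mode="valid")
    img = np.apply_along_axis(blur, 1, img.astype(float))   # rows
    img = np.apply_along_axis(blur, 0, img)                 # columns
    return img

def gradient_edges(img, sigma=10.0, thresh=0.5):
    """Simplified stand-in for Canny: smooth at the given scale, take the
    gradient magnitude, and threshold at a fraction of the maximum.
    No non-maximum suppression or hysteresis is performed."""
    s = smooth(img, sigma)
    gy, gx = np.gradient(s)
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max()
```

With a 10-pixel-sigma filter, a sharp luminance step is detected as a band of edge pixels centered on the step, which is why patch-level statistics (rather than the raw edge map) are needed to characterize the edge.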
Figure 2
 
Images from the McGill Color Image Database used in the study.
A set of 1,000 edges found by the Canny algorithm was selected randomly from the 38 natural images. The selected edge locations were uniformly distributed over the image area. No two selected edge locations in an image were within a distance of 80 pixels of each other. 
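The minimum-distance constraint can be enforced with simple rejection sampling; the sketch below is our own illustration (the paper does not state which sampling algorithm was used, and the function name is ours):

```python
import random

def select_edge_locations(candidates, n_wanted, min_dist=80.0, seed=0):
    """Randomly pick edge locations from one image so that no two chosen
    locations are closer than min_dist pixels (rejection sampling)."""
    rng = random.Random(seed)
    pool = list(candidates)
    rng.shuffle(pool)
    chosen = []
    for (x, y) in pool:
        # Accept only if far enough from every previously accepted location.
        if all((x - cx)**2 + (y - cy)**2 >= min_dist**2 for (cx, cy) in chosen):
            chosen.append((x, y))
            if len(chosen) == n_wanted:
                break
    return chosen
```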
Using the selected edges, 1,000 image stimuli were generated. To generate each stimulus, a red bounding box was placed around a selected edge in an image. The 100 × 100-pixel bounding box subtended a visual angle of approximately 2.4°. The entire stimulus subtended a visual angle of 36.6°. Along with the red bounding box, the edge line from the Canny algorithm was also placed on top of the actual edge in red. Figure 3 shows the graphical user interface with the stimuli. 
Figure 3
 
The graphical user interface for the experiment. The image on top shows the interface used to make a decision between occlusion and nonocclusion categories using the horizontal slider. The image below shows the interface with the triangular slider used for the subcategorization of nonocclusion edges after the participant has rated the edge to be in the nonocclusion category with a confidence of 75% or higher.
Participants
Six graduate student volunteers (mean age = 27 years) from the Computational Perception and Image Quality Laboratory at Oklahoma State University took part in the experiment. The experiment was approved by the Institutional Review Board of Oklahoma State University. The participants were given 2 weeks to complete the experiment in five sessions. The sixth participant completed four of the sessions in a single day, and his results contained many outliers when compared with the other participants. As a result, his responses were excluded from the analysis. 
Procedure
Each subject categorized the edge type of all 1,000 stimuli with a slider in the user interface. At the onset of a stimulus, a red bounding box with a crosshair and a red line over the target edge flashed alternately on and off in 1-s intervals until a response was made by the subject. Participants observed the selected edge with and without the red bounding box and responded as to whether the displayed edge was an occlusion edge or a nonocclusion edge using a slider. The extreme left of the slider represented 100% confidence that the displayed edge was an occlusion edge, whereas the extreme right represented 100% confidence that the displayed edge was a nonocclusion edge. Figure 3 shows the user interface for the experiment. The top image shows the interface with the horizontal slider. 
Edges rated to be nonocclusion edges with a confidence of 75% or higher were further divided into three subcategories. In order to make a nonocclusion categorization, participants were shown a new triangular slider on the screen after they made the response of occlusion or nonocclusion. The bottom image of Figure 3 shows the user interface with the triangular selector and a red crosshair within that triangle. The three vertices of the triangle represented unambiguous judgments (100%) of the nonocclusion subcategories. Participants made their judgment of the nonocclusion subcategory by placing the red crosshair at an appropriate position in the triangle. For example, when the red crosshair was placed near the vertex representing a reflectance change edge, the user was indicating a very high confidence that the highlighted edge was a reflectance change. If the red crosshair was placed midway between any two vertices of the triangle, the subject judged the edge to have equivalent properties of the two edge categories represented by the two vertices. Similarly, if the red crosshair was placed at the center of the triangle, the subject judged the edge to have properties from all nonocclusion subcategories. 
Potential biases
The methodology we introduce here has several potential sources of bias, three of which we wish to emphasize. It is worth noting that any technique that attempts to deduce the underlying cause of an edge will suffer from some bias, as there is no objective measure of ground truth. In our methods, we used human observers and a Canny edge detection algorithm to detect and classify edges. These are imperfect methods, and the use of human observers certainly introduces potential biases. The instructions to observers, the choice of images, and the choice of parameters in the Canny operator are all likely to have some effect on the statistics described here. Although we believe we have selected a reasonable set of parameters, their effects will not be clear until a variety of studies have explored the parameter space. The three choices we wish to emphasize are: 
  1.  
    The choice of images. We selected images from the McGill database that had well-defined objects with reasonably well-defined boundaries. A much larger database of images needs to be explored. Images like those of the van Hateren image set (van Hateren & van der Schaaf, 1998), for example, contain many scenes where edges are quite difficult to label (e.g., grass, leaves) or many of the edges are from objects that approach the sizes of the pixels. We are currently exploring how the image set affects these statistics.
  2.  
    The settings of the Canny edge detection algorithm. For example, we chose a particular scale for most of these studies (a 10-pixel scale Gaussian filter). We have repeated a portion of these studies with a larger scale (a 20-pixel scale Gaussian filter) and obtained largely similar results. However, there is a large space of parameters that could be explored, and we cannot be confident that a different choice of parameters would not alter these results. We do believe, however, that the parameters we chose represent a reasonable first attempt.
  3.  
    The procedures and observers. Our method of classifying edges began with the classification of occlusion versus nonocclusion and then proceeded to a three-way classification of nonocclusion edges. Although we found that this approach was reasonable, it is not clear how different procedures might alter these results. It should be noted that Elder et al. (1999), in an unpublished study, produced largely similar classification results with different procedures.
Results: Occlusion edges in natural scenes
Across participants, approximately 50% of the edges were classified as occlusion edges and 50% of the edges were classified as nonocclusion edges. If a participant rated an edge as an occlusion edge with 50% (or higher) confidence, then that edge was classified as an occlusion edge for this measure. Individually, Participants 1 to 5 identified 49%, 50%, 52%, 51%, and 51% of edges as occlusion edges, respectively. Overall, 50.6% of the edge stimuli were classified as occlusion edges. In addition, the results for the experiment repeated with a larger scale (20-pixel) Canny operator yielded very similar proportions of occlusion edges: Overall, 49% of the edge stimuli were classified as occlusion edges. 
We also examined the degree of mutual agreement between the five participants. Table 1 shows the proportion of occlusion edges and nonocclusion edges with between-participant agreements of 100% (five out of five participants), 80% (four out of five), and 60% (three out of five). The fact that the proportions are similar across different degrees of mutual agreement indicates that, for edges found by the Canny edge detector, occlusion edges occur as frequently as nonocclusion edges. The fourth column in the table shows the proportion of the edges which did not satisfy the minimum mutual agreement criteria. This column indicates that for 100% between-participant agreement, classifications for 13% of the edge stimuli did not meet the criterion (i.e., agree across all five participants). For 80% between-participant agreement, only 5% of the edge stimuli did not meet the criterion. 
Table 1
 
The proportion of occlusion and nonocclusion edges at various degrees of mutual agreement between participants. The fourth column shows the proportion of edges which did not satisfy the between-participant agreement criteria.
Mutual agreement Occlusion edge Nonocclusion edge Edges not satisfying the criterion
100% (5 out of 5) 44% 43% 13%
80% (4 out of 5) 48% 47% 5%
60% (3 out of 5) 50% 50% 0%
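The agreement tallies of Table 1 can be reproduced from a rater-by-edge label matrix with a routine like the following sketch (the matrix layout and function name are our own; labels of 0 and 1 stand for occlusion and nonocclusion, respectively):

```python
import numpy as np

def agreement_proportions(labels, k):
    """Proportion of edges whose label is shared by at least k raters.
    labels: (n_raters, n_edges) array of 0 (occlusion) / 1 (nonocclusion).
    Returns (occlusion, nonocclusion, unresolved) proportions."""
    n_raters, n_edges = labels.shape
    votes = labels.sum(axis=0)            # nonocclusion votes per edge
    occ = (n_raters - votes) >= k         # >= k raters said occlusion
    nonocc = votes >= k                   # >= k raters said nonocclusion
    unresolved = ~(occ | nonocc)          # edges failing the criterion
    return occ.mean(), nonocc.mean(), unresolved.mean()
```

As in Table 1, lowering the agreement criterion from 5/5 to 3/5 shrinks the unresolved column while the occlusion/nonocclusion split stays near 50/50.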
Analysis of occlusion versus nonocclusion edges
Next, we examined the local statistical properties of edges based on their classification as occlusions or nonocclusions. Images were analyzed using linear luminance values. (We also analyzed edges from log-luminance images and found similar results.) 
To analyze the statistics of the edges, we selected only those occlusion and nonocclusion edges which had at least 80% mutual agreement (four out of five participants agreed on the category). Out of 1,000 edges, 946 had 80% between-participant mutual agreement. However, we selected only 673 edges (330 occlusion edges and 343 nonocclusion edges) for the edge patch extraction. The remaining 273 edge patches (145 occlusion edges and 128 nonocclusion edges) were not selected because those patches had more than one edge within the patch. The selected edges were then extracted into small patches of 81 × 41 pixels. These patches were aligned using the Radon transform such that the edge line was oriented horizontally and located at the center of the patch at the 41st pixel row. The patches were additionally oriented such that the higher luminance half of the patch was always placed on top and the lower luminance half of the patch was placed on bottom. Figure 4a shows an illustration of an extracted patch, which has higher and lower luminance areas separated by an edge. All the extracted edge patches are freely available from our online database (http://redwood.psych.cornell.edu/edges). 
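The final orientation step (brighter half on top) can be sketched as below. The Radon-transform rotation that makes the edge horizontal is assumed to have been applied already, and the row split around the 41st pixel row is our reading of the 81 × 41 layout:

```python
import numpy as np

def orient_patch(patch):
    """Given an 81x41 patch whose edge already runs horizontally through
    the center row, flip it vertically if needed so that the
    higher-luminance half ends up on top (as in Figure 4a)."""
    assert patch.shape == (81, 41)
    top_mean = patch[:40].mean()      # rows above the edge row
    bottom_mean = patch[41:].mean()   # rows below the edge row
    return patch if top_mean >= bottom_mean else patch[::-1]
```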
Figure 4
 
(a) An 81 × 41-pixel extracted edge patch. The patch is aligned such that the higher luminance side is on top and the lower luminance side is on the bottom. The edge line between the two sides is at the 41st pixel row. (b) A sample of the extracted occlusion edges. (c) A sample of the extracted nonocclusion edges. Both sets of extracted edges were first identified using the Canny edge operator and then classified by human observers.
Figure 4b shows a set of extracted occlusion edges, and Figure 4c shows a set of extracted nonocclusion edges. Figure 5 shows some of the patches not selected for the statistical analysis of occlusion and nonocclusion edges. 
Figure 5
 
A set of edge patches not selected for the statistical analysis of occlusion and nonocclusion edges. These patches have multiple edges in the extracted patch.
The contrast distribution of occlusion versus nonocclusion edges
We measured the distribution of contrasts for edges classified as occlusions and nonocclusions with both Michelson contrast and root-mean-square (RMS) contrast. Michelson contrast measures the contrast between the two sides of the occlusion edges (occluding side and occluded side), whereas RMS contrast measures the contrast over the entire edge patch. The Michelson contrast for an edge patch was calculated as follows:

$$C_{M} = \frac{L_{top} - L_{bottom}}{L_{top} + L_{bottom}} \tag{1}$$

where $L_{top}$ is the mean luminance of the higher luminance (top) section and $L_{bottom}$ is the mean luminance of the lower luminance (bottom) section of the edge patch. To compute $L_{top}$ and $L_{bottom}$, the 81 × 41 edge patch was divided into three sections. The sizes of the top, middle, and bottom sections were 30 × 41, 21 × 41, and 30 × 41, respectively. $L_{top}$ and $L_{bottom}$ were computed using the top and the bottom sections, respectively:

$$L_{top} = \frac{1}{30 \times 41} \sum_{(x,y) \in \mathrm{top}} ep(x, y), \qquad L_{bottom} = \frac{1}{30 \times 41} \sum_{(x,y) \in \mathrm{bottom}} ep(x, y) \tag{2}$$

where $ep(x, y)$ denotes the luminance value at pixel location $(x, y)$ in the edge patch. The middle section, which included the edge line, was excluded from the Michelson contrast computation to reduce the effects of edge blur and edge curvature. 
RMS contrast was measured as the ratio of the standard deviation to the mean luminance of the edge patch:

$$C_{RMS} = \frac{1}{\overline{ep}} \sqrt{\frac{1}{81 \times 41} \sum_{x} \sum_{y} \left( ep(x, y) - \overline{ep} \right)^{2}} \tag{3}$$

where $\overline{ep}$ denotes the mean luminance of the patch and $x$ and $y$ denote pixel coordinates. 
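Both contrast measures can be sketched directly from the definitions in the text, assuming the 30/21/30-row split of the 81 × 41 patch (row indices are our 0-based reading of that layout):

```python
import numpy as np

def michelson_contrast(patch):
    """Equation 1: contrast between the top (rows 0-29) and bottom
    (rows 51-80) sections of an 81x41 patch; the 21-row middle band
    containing the edge line is excluded."""
    l_top = patch[:30].mean()
    l_bottom = patch[51:].mean()
    return (l_top - l_bottom) / (l_top + l_bottom)

def rms_contrast(patch):
    """Equation 3: standard deviation over mean of the whole patch."""
    return patch.std() / patch.mean()
```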
Figure 6a through d shows the histograms for the contrast of occlusion and nonocclusion edge patches. Figure 6a and b shows the Michelson contrast histograms, and Figure 6c and d shows the RMS contrast histograms. The horizontal axis of each histogram specifies the contrast of an edge patch, and the vertical axis specifies the number of edges in that contrast range. These distributions reveal that for those edges found by the Canny algorithm, the occlusion edges have a relatively high contrast compared to the nonocclusion edges. Similar distinctions were observed between the log-luminance occlusion and nonocclusion edge patches (not shown). 
Figure 6
 
The histograms of Michelson contrast and root-mean-square (RMS) contrast for occlusion and nonocclusion edges. (a–b) Histograms of Michelson contrast in occlusion and nonocclusion edge patches. (c–d) Histograms of RMS contrast in occlusion and nonocclusion edge patches. (e–f) Empirical cumulative distribution functions (CDFs) of Michelson contrast and RMS contrast in occlusion and nonocclusion edge patches. The blue curve shows the CDF of contrast in occlusion edges, and the red curve shows the CDF of contrast in nonocclusion edges.
Figure 6e and f shows the empirical cumulative distribution functions (CDFs) for Michelson and RMS contrasts of occlusion and nonocclusion edges. Figure 6e shows the CDF for the Michelson contrast of edges, and Figure 6f shows the CDF for the RMS contrast. The horizontal axis represents the contrast of edges, and the vertical axis represents the proportion of edges with a contrast below that indicated on the horizontal axis. As can be seen from the CDF of the Michelson contrast, 75% of the occlusion edges have contrast values greater than 0.42, and 75% of the nonocclusion edges have contrast values less than 0.23. Similarly, from the CDF of the RMS contrast, 75% of occlusion edges have contrast values greater than 0.41, and 75% of nonocclusion edges have contrast values less than 0.25. For the log-luminance edge patches (not shown), 75% of the occlusion edges had Michelson contrast values greater than 0.075, and 75% of the nonocclusion edges had contrast values less than 0.035; for RMS contrast, 75% of occlusion edges had contrast values greater than 0.071, and 75% of nonocclusion edges had contrast values less than 0.037. These results indicate that Michelson contrast or RMS contrast can be used as a strong cue in predicting whether an edge located by the Canny algorithm is an occlusion or nonocclusion edge. 
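As a toy illustration of how contrast alone could serve as such a cue, the sketch below picks a single decision threshold from the two contrast distributions. This is our own simplification, not the maximum likelihood classifier described in the abstract:

```python
import numpy as np

def contrast_threshold_classifier(occ_contrasts, nonocc_contrasts):
    """Place a single threshold midway between the 25th percentile of
    occlusion contrasts and the 75th percentile of nonocclusion contrasts
    (the quantiles reported from the CDFs in Figure 6), then label
    anything above it an occlusion."""
    lo = np.percentile(occ_contrasts, 25)
    hi = np.percentile(nonocc_contrasts, 75)
    thresh = 0.5 * (lo + hi)
    return lambda c: "occlusion" if c > thresh else "nonocclusion"
```

Because the two distributions overlap only modestly, even this one-parameter rule separates most edges; a likelihood-based rule can additionally weight how far a contrast value falls from each distribution.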
The average occlusion and nonocclusion edges
We computed the average normalized occlusion edge and nonocclusion edge. First, each edge patch was normalized such that it spanned the range from 0 to 1. The average occlusion and nonocclusion edge patches were then computed as follows:

$$\mu(x, y) = \frac{1}{N} \sum_{i=1}^{N} ep_{i}(x, y)$$

where $\mu(x, y)$ is the average luminance at pixel location $(x, y)$, $ep_{i}$ denotes the $i$th extracted (normalized) patch, $N$ denotes the total number of extracted edge patches, and $x$ and $y$ denote the pixel coordinates. 
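A minimal sketch of this normalization and pixelwise averaging, following the μ(x, y) definition in the text:

```python
import numpy as np

def average_edge(patches):
    """Normalize each patch to span [0, 1], then average pixelwise,
    yielding the mean edge profile mu(x, y)."""
    norm = [(p - p.min()) / (p.max() - p.min()) for p in patches]
    return np.mean(norm, axis=0)
```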
The two-dimensional average edge patch was then converted to a one-dimensional average edge profile by averaging across each row in the two-dimensional patch. Figure 7a and b shows the one-dimensional average occlusion and nonocclusion edges, respectively. These data show that the average occlusion edge has a sharper transition from low luminance to high luminance than the average nonocclusion edge. Also shown are a sample of 20 randomly selected occlusion or nonocclusion edges, plotted in black. The sample occlusion edges clearly have a steeper transition than the sample nonocclusion edges. The average normalized log-luminance occlusion and nonocclusion edges (not shown) yielded similar results. 
Figure 7
 
One-dimensional profiles of normalized average occlusion and nonocclusion edges. (a) The normalized average occlusion edge in blue with 20 sample occlusion edges. (b) The normalized average nonocclusion edge in red with 20 sample nonocclusion edges. The edges in (a) and (b) were first detected by the Canny operator and then categorized by participants as occlusion or nonocclusion edges. The slope of occlusion edges was significantly different from the slope of nonocclusion edges, t(671) = 16.08, p < 0.0001.
We also compared the slopes of luminance transition for occlusion and nonocclusion edges. For each extracted two-dimensional edge patch, a slope was computed by first converting the edge patch to a one-dimensional edge profile with a length of 81 pixels. Then the slope of each one-dimensional edge profile was computed as a mean change in the luminance from the 36th pixel to the 46th pixel. The slope of transition of occlusion edges was significantly higher, t(671) = 16.08, p < 0.0001, than the slope for nonocclusion edges. The average nonocclusion edge appears to be nonmonotonic, with the greatest contrast difference near the center of the edge. We will return to this point later in describing the average edges of nonocclusion subcategories. 
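The slope measure can be sketched as below. "Mean change in luminance from the 36th pixel to the 46th pixel" admits more than one reading; this version takes the mean of the per-pixel luminance differences over that window (1-based indices, as in the text), which is one plausible interpretation:

```python
import numpy as np

def edge_slope(profile, start=36, stop=46):
    """Mean per-pixel luminance change between the start-th and stop-th
    pixels of an 81-pixel edge profile (1-based indices). One reading of
    the paper's 'mean change in luminance' over that window."""
    window = np.asarray(profile, dtype=float)[start - 1:stop]
    return np.mean(np.diff(window))
```

A steeper occlusion edge yields a larger slope magnitude over the central window than a gradual nonocclusion edge.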
Mean luminance versus contrast
Here we investigate the relationship between contrast and mean luminance of the edge patches. Mante, Frazor, Bonin, Geisler, and Carandini (2005) found that contrast and luminance were statistically independent. The image patches they analyzed were extracted from a simulated saccadic inspection of natural scenes, and their patches did not necessarily include an edge. To determine whether our edge patches also show this independence, we analyzed the relationship between contrast and mean luminance for our extracted edge patches. Michelson contrast was computed as shown in Equation 1 and RMS contrast as shown in Equation 3. Mean luminance was computed as follows:

\[ L_{mean} = \frac{L_{top} + L_{bottom}}{2} \]

where \(L_{top}\) is the mean luminance of the higher luminance top section of the patch and \(L_{bottom}\) is the mean luminance of the lower luminance bottom section of the patch. 
Figure 8a and b shows the scatter plots of mean luminance versus Michelson contrast and mean luminance versus RMS contrast of edge patches, respectively. The blue circles represent points from the occlusion edge patches, and the red circles represent points from the nonocclusion edge patches. As can be seen from Figure 8a and b, there is no significant correlation between the contrast and the mean luminance, except the correlation between the Michelson contrast and mean luminance for nonocclusion edges, which is weak but significant, r(341) = −0.13, p = 0.02. These results are consistent with the findings of Mante et al. (2005) and suggest that mean luminance and contrast are largely independent for both occlusion and nonocclusion edges. One should note that the measure of RMS contrast used here is a normalized RMS contrast, where the RMS contrast was normalized by dividing by the mean luminance of the edge patch (Equation 3). One would expect to have a strong correlation between the nonnormalized RMS contrast and the mean luminance, as there would be more variation in luminance in the edge patches with higher mean luminance compared to those with lower mean luminance. Indeed, we found strong correlations (r > 0.85) between the nonnormalized RMS contrast and the mean luminance for occlusion and nonocclusion edges. 
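The correlation analysis above can be sketched with a plain Pearson coefficient and the two-halves mean-luminance definition. The split of the patch into a top and bottom half is an assumption about how \(L_{top}\) and \(L_{bottom}\) are obtained:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym))

def mean_luminance(patch):
    """Mean luminance as the average of the two halves of the patch,
    one reading of the text's (Ltop + Lbottom) / 2 definition."""
    top, bottom = np.array_split(np.asarray(patch, float), 2, axis=0)
    return (top.mean() + bottom.mean()) / 2
```

Applying `pearson_r` to the per-patch contrast and `mean_luminance` values reproduces the kind of r statistics reported for Figure 8; a near-zero r indicates the independence found by Mante et al. (2005).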
Figure 8
 
Scatter plots of the mean luminance versus contrast of occlusion and nonocclusion edge patches. (a) Scatter plot of mean luminance versus Michelson contrast of occlusion edge patches, correlation r(328) = 0.08, p = 0.15, and nonocclusion edge patches, correlation r(341) = −0.13, p = 0.02. (b) Scatter plot of mean luminance versus root-mean-square contrast of occlusion edge patches, correlation r(328) = 0.08, p = 0.15, and nonocclusion edge patches, correlation r(341) = −0.10, p = 0.06. The blue open circles represent occlusion edge patches, and the red open circles represent nonocclusion edge patches.
Analysis of nonocclusion edge subcategories
All of the edges labeled as nonocclusion edges with a slider rating of more than 75% were sorted into three subcategories: reflectance change (RC), cast shadow (CS), and surface change (SC). As mentioned previously, the subcategory rating for each nonocclusion edge was made using a triangular slider, where the vertices represented the three subcategories. The subcategory rating was registered by placing the crosshair in the triangle at the appropriate position. Figure 9 shows the density maps of crosshair placement in the triangular slider for each participant. Qualitatively, the density maps indicate that most of the nonocclusion edges in the natural scenes used in this study were judged to be due to reflectance changes and surface changes; most of the slider positions lie on the line segment between the vertices corresponding to RC and SC. 
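The triangular slider turns a single crosshair position into three subcategory weights. The paper does not specify its mapping, but a standard choice is barycentric coordinates; the sketch below is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def barycentric_weights(p, v1, v2, v3):
    """Map a 2-D crosshair position inside a triangle to three
    nonnegative weights (one per vertex/category) summing to 1."""
    v1, v2, v3 = (np.asarray(v, float) for v in (v1, v2, v3))
    T = np.column_stack([v1 - v3, v2 - v3])  # 2x2 basis matrix
    w1, w2 = np.linalg.solve(T, np.asarray(p, float) - v3)
    return np.array([w1, w2, 1.0 - w1 - w2])
```

With the RC, CS, and SC subcategories assigned to the three vertices, a crosshair near the RC vertex yields weights close to (1, 0, 0), i.e., an edge judged almost entirely a reflectance change.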
Figure 9
 
The density of the triangular slider placements for each participant, and the overall average density computed from the mean slider positions across participants for each edge. The overall density map is not the sum of the participants' slider positions; each point corresponding to an edge in the overall map is the average slider position for that edge across all participants (vertices: Reflectance Change [RC], Cast Shadow [CS], and Surface Change [SC]).
Proportion of nonocclusion subcategories
Figure 10 shows the relative proportions of the three subcategories of nonocclusion edges for each participant. The triangular slider was divided into three regions, as shown in the figure; the three regions represent the subcategories of nonocclusion edges. The upper region represents a cast shadow (CS) edge: If the slider crosshair was placed in this region, the edge was counted as a cast shadow edge. Similarly, the lower left region represents a reflectance change (RC) edge, and the lower right region represents a surface change (SC) edge. The relative proportions of the subcategories per participant were significantly different, F(2, 12) = 12.11, p = 0.0013. A Tukey HSD post hoc test indicated that cast shadow edges (M = 40, SD = 8) occurred significantly less frequently than surface change edges (M = 269, SD = 102), whereas the proportion of reflectance change edges (M = 151, SD = 76) did not differ significantly from either of the other two subcategories. Here M and SD denote the mean and standard deviation of the number of edges per participant in each subcategory. Most of the nonocclusion edges were classified as reflectance changes or surface changes, but the ratings were not consistent across participants: Participants 1 and 2 categorized nonocclusion edges as predominantly due to reflectance changes, while Participants 3, 4, and 5 categorized them as mostly due to surface changes. Overall, approximately 31% of edges were due to reflectance changes, 8% were due to cast shadows, and 56% were due to surface changes. The experiment repeated with the larger scale (20-pixel) Canny operator yielded very similar proportions of nonocclusion edges: approximately 25% reflectance changes, 9% cast shadows, and 66% surface changes. 
Figure 10
 
(a) The triangular slider for subcategorization was divided into three regions representing the three subcategories (Reflectance Change [RC], Cast Shadow [CS], and Surface Change [SC]) of nonocclusion edges. (b) The proportions of subcategories of nonocclusion edges for each participant and overall mean proportions. Overall, approximately 31% of edges were due to reflectance changes, 8% were due to cast shadows, and 56% were due to surface changes.
We also examined the degree of mutual agreement among the five participants. Figure 11a shows the absolute proportions of the three categories of nonocclusion edges at 100%, 80%, and 60% between-participant mutual agreement. That is, it shows agreement between five out of five participants, four out of five participants, and three out of five participants, respectively. Figure 12 shows a sample of edges in each subcategory with a red bounding box around the edges. All of the edges shown in the top three rows corresponding to the three subcategories have at least 80% between-participant agreement; the edges in the bottom row are the indeterminate edges which did not meet the 80% between-participant agreement. Overall, the cast shadow edges occur rarely as compared to the other two categories, while surface change edges occur most frequently. However, there is a large variation in the proportion of surface change edges at different between-participant mutual agreements. With 60% mutual agreement, the proportion of surface change edges is 70%; with 100% mutual agreement, the proportion of surface change edges is only 8%. This suggests that high disagreement on edges classified as surface changes led to many edge patches that did not meet the 80% mutual agreement criterion. Similar variations can be seen for the reflectance change edge proportions. Figure 11b shows these variations in the regions of the triangle. The three ellipses in the figure represent the variations between participants. The major axis and the minor axis of the ellipses represent the standard deviation in the horizontal and the vertical directions, respectively. The asterisk at the center of each ellipse represents the overall mean of the triangular slider placements of the five participants in each region, and the circular dots coded with different colors show the density of the mean slider placement. 
Figure 11
 
(a) The proportions of subcategories of nonocclusion edges at different degrees of mutual agreement between participants. (b) The three ellipses show the variations in horizontal and vertical directions in each region, corresponding to the three subcategories of nonocclusion edges. The three asterisks represent the mean placement of the slider for all edges in each region. The colored circles represent the mean placement of the slider for each edge.
Figure 12
 
Samples of occlusion and nonocclusion edges categorized with at least 80% between-participant mutual agreement. Each edge shown here is bounded by a red box. The first row shows edges categorized as occlusion edges. The next three rows correspond to edges categorized as reflectance changes, cast shadows, and surface changes, and the bottom row shows the indeterminate edges which did not meet 80% between-participant mutual agreement.
Local statistics of nonocclusion edges
The average Michelson contrasts of nonocclusion edges in each category at different degrees of mutual agreement are shown in Figure 13. We found significant differences between the contrast of edges in each category, F(2, 193) = 99.92, p < 0.0001. A post hoc comparison using the Tukey HSD test also revealed a significant difference in contrast between each category. The cast shadow edges exhibit the highest Michelson contrast (M = 0.616, SD = 0.19), and the surface change edges exhibit the lowest Michelson contrast (M = 0.142, SD = 0.113) among all nonocclusion edge subcategories. Here M and SD represent the mean and standard deviation of contrast. 
Figure 13
 
The mean contrast of each subcategory at different degrees of between-participant mutual agreement. The error bars represent the mean standard deviations of contrast in each category. The contrast difference between each category was statistically significant, F(2, 193) = 99.92, p < 0.0001.
Figure 14a through c shows the normalized average edge for each subcategory of nonocclusion edges. These data indicate that the normalized average cast shadow edge and surface change edge have sharper transitions from lower luminance to higher luminance. We found significant differences in the slopes of nonocclusion subcategories, F(2, 193) = 8.15, p = 0.0004. Post hoc comparisons using the Tukey HSD test indicated that mean slope of the cast shadow edges (M = −0.042, SD = 0.023) was significantly different from the mean slopes of reflectance change edges (M = −0.022, SD = 0.01) and surface change edges (M = −0.026, SD = 0.015). However, the mean slopes of reflectance change and surface change edges were not significantly different. Here M and SD represent the mean and standard deviation of the slope of normalized average edges. 
Figure 14
 
One-dimensional profiles of normalized average nonocclusion edge subcategories. (a) The normalized average reflectance change (RC) nonocclusion edge in red, with 20 sample RC edges. (b) The normalized average cast shadow (CS) nonocclusion edge in red, with 13 sample CS edges. (c) The normalized average surface change (SC) nonocclusion edge in red, with 20 sample SC edges. The slope of CS edges was significantly different from those of RC (p < 0.01) and SC (p < 0.01) edges.
As we noted earlier, the average nonocclusion edge shows a nonmonotonic luminance profile. Figure 14c shows that this result is primarily due to the surface change subcategory. Further analysis is required, but we speculate that this profile results from a peak in the reflection at the center of the fold in the surface. This may be due to a very local change in the shape of the surface. As distance from the local surface change increases, the reflected intensity returns to the mean intensity of the surface. 
Human-labeled occlusion edges
It is important to note that the results presented in previous sections may have an intrinsic bias, as they were based on the edges found by the Canny detection algorithm. The Canny algorithm determines that an edge is present when there is a luminance difference above a certain threshold (Canny, 1986). In order for an edge to be classified as an occlusion edge in the experiment, it must first be detected by the Canny algorithm. Based on the results of DiMattina et al. (2012), it is likely that many occlusion boundaries easily identified by participants will be missed by the Canny operator. This can be true for both texture edges and low contrast edges that have long-range support (that is, they can be inferred by integrating along the contour). To construct a broader account of occlusion, we asked participants to identify the locations of occlusion edges by tracing these edges on our collection of images. 
Edge tracings
Three participants were asked to trace the occlusion edges in 38 natural-scene images from the McGill Color Image Database. The images were displayed in their original color versions. Participants' instructions were as follows: “In the displayed image you should only trace the edges which occur when an object occludes another object. Only trace the edges which are formed by the main objects of the images. Ignore the occlusion edges formed by small objects such as: grass, leaves, small flowers, etc.” Participants were shown examples of tracings done earlier by the first author of this paper. Participants used the Adobe Photoshop Brush tool controlled by a mouse for tracing on color versions of the natural images. The Brush tool was set to a diameter of 9 pixels and the color red. The edges were traced on a separate Adobe Photoshop layer and overlaid on the top of the image layer. The left side of Figure 15 shows an image displayed to a participant for the tracing of the occlusion edges, and the right side shows the resulting tracing in red. 
Figure 15
 
(a) Original high-resolution image and (b) occlusion edge tracing in red from a participant.
Using the occlusion edge traces from the participants, we extracted 81 × 41-pixel edge patches. These patches were oriented using the same procedure as the edge patches extracted using the Canny algorithm. Figure 16 shows samples of the occlusion edge patches extracted using the hand-traced data. All the extracted edge patches are freely available from our online database (http://redwood.psych.cornell.edu/edges). 
Figure 16
 
A sample set of the extracted hand-labeled occlusion edges.
Local statistics of hand-traced occlusion edges
Figure 17 shows the distributions of Michelson contrast and RMS contrast for occlusion edges extracted using the hand tracings. In each panel, the horizontal axis shows the contrast values and the vertical axis shows the number of occlusion edge patches. Figure 17a shows the distribution of Michelson contrast, and Figure 17b shows the distribution of RMS contrast. The Michelson contrast distribution is roughly uniform with a bias towards low contrast (see Discussion); similarly, the RMS contrast distribution in Figure 17b is roughly uniform with a bias towards low RMS contrast. 
Figure 17
 
The distribution of hand-labeled occlusion edge contrast in natural scenes. (a) The distribution of Michelson contrast in hand-labeled occlusion edge patches. (b) The distribution of root-mean-square contrast in hand-labeled occlusion edge patches.
Figure 18a and b shows the one-dimensional normalized average occlusion and nonocclusion edges found by the Canny algorithm, replotted from Figure 7a and b for comparison against the average hand-traced occlusion edge. Figure 18c shows the one-dimensional normalized average occlusion edge for the hand-traced images. This profile is similar to the normalized average of the occlusion edges found by the Canny algorithm. 
Figure 18
 
One-dimensional profiles of normalized average occlusion edges, nonocclusion edges, and hand-traced occlusion edges. (a) The normalized average occlusion edge in blue, with 20 sample occlusion edges. (b) The normalized average nonocclusion edge in red, with 20 sample nonocclusion edges. The edges in (a) and (b) were first detected by the Canny operator and then categorized by participants as occlusion or nonocclusion edges. The slope of occlusion edges was significantly different from the slope of nonocclusion edges, t(671) = 16.08, p < 0.0001. (c) The normalized average hand-traced occlusion edge in blue, with 20 sample occlusion edges (details in Human-labeled occlusion edges).
Edge classification using maximum likelihood classification
To investigate whether a local feature such as contrast can be used to classify an edge into the occlusion or nonocclusion category, we used the Michelson contrast as the local feature with a basic maximum likelihood classifier. The class which yields the maximum likelihood given the contrast of an unknown edge is the predicted class of that edge. The maximum likelihood for each edge category was computed using Bayes's theorem:

\[ \widehat{Class} = \arg\max_{Class} \, p(Class \mid Contrast), \qquad p(Class \mid Contrast) \propto p(Contrast \mid Class) \, p(Class) \]

where \(\widehat{Class}\) is the maximum likelihood estimate of an edge class, p(Class | Contrast) is the probability of an edge class given edge contrast, p(Class) is the prior probability of occurrence of an edge class, and p(Contrast | Class) is the probability of a contrast value given the class of an edge. Of the occlusion and nonocclusion edges, 80% were randomly selected and used to train the classifier; the remaining 20% were used to test it. This cross-validation scheme was iterated 100 times in order to measure the mean classification accuracy. All the occlusion and nonocclusion edges were first found by the Canny algorithm and extracted into edge patches, as described before. The prior probability of an occlusion edge, p(Class = occlusion), was set to 0.4368, and that of a nonocclusion edge, p(Class = nonocclusion), was set to 0.4338, based on 100% between-participant agreement. The remaining probability (0.1294) is accounted for by the edges which did not satisfy the 100% between-participant agreement criterion, p(Class = indeterminate). The likelihood p(Contrast | Class) for each edge class was estimated from the training edges. 
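The classifier and the 80%–20% split can be sketched as below on synthetic data (not the paper's edges). The histogram estimate of p(Contrast | Class) is an assumption; the paper does not state how the likelihood was learned:

```python
import numpy as np

def fit_likelihoods(contrasts, labels, bins):
    """Histogram estimate of p(contrast | class) for each edge class."""
    return {cls: np.histogram(contrasts[labels == cls], bins=bins, density=True)[0]
            for cls in set(labels)}

def classify(contrast, likes, priors, bins):
    """Predicted class = argmax over classes of p(contrast | class) * p(class)."""
    i = min(np.searchsorted(bins, contrast, side='right') - 1, len(bins) - 2)
    return max(likes, key=lambda cls: likes[cls][i] * priors[cls])

# Synthetic demo: occlusion contrasts high, nonocclusion contrasts low,
# with an 80/20 train/test split as in the paper's cross validation.
rng = np.random.default_rng(0)
contrasts = np.concatenate([rng.uniform(0.4, 1.0, 500), rng.uniform(0.0, 0.35, 500)])
labels = np.array(['occlusion'] * 500 + ['nonocclusion'] * 500)
order = rng.permutation(1000)
train, test = order[:800], order[800:]

bins = np.linspace(0.0, 1.0, 21)
likes = fit_likelihoods(contrasts[train], labels[train], bins)
priors = {'occlusion': 0.4368, 'nonocclusion': 0.4338}  # priors from the text
predicted = [classify(c, likes, priors, bins) for c in contrasts[test]]
accuracy = np.mean(np.array(predicted) == labels[test])
```

In the paper this scheme is repeated over 100 random splits and the mean accuracy reported; the synthetic data here separate cleanly, so a single split suffices to illustrate the mechanics.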
Table 2 shows the prediction performance of the Michelson contrast as a local feature in classifying the remaining 20% of edges as occlusion or nonocclusion edges. Similar results were obtained using the RMS contrast as a local cue for prediction (not shown). 
Table 2
 
The confusion matrix showing the ability of the Michelson contrast as a local cue to predict whether an edge is an occlusion or a nonocclusion edge.
True class          Predicted: occlusion edge   Predicted: nonocclusion edge
Occlusion edge      83.09% ± 4.41%              16.91% ± 4.41%
Nonocclusion edge   16.45% ± 4.53%              83.55% ± 4.53%
Figure 19 shows predictions for occlusion edges (in green) and nonocclusion edges (in red) in natural images, using the maximum likelihood classifier. To generate these predictions, the Canny edge detection algorithm was first applied to the original images to determine the edge locations. Then, for each edge location, an edge patch of 81 × 41 pixels was extracted. Finally, for each extracted patch, the Michelson contrast was computed and used as the local feature for classification. For the first two images, most of the edges are correctly classified as occlusion or nonocclusion edges. However, the rightmost image shows that our classifier can make numerous errors for some images. Overall, these results demonstrate that contrast alone is a strong local cue for predicting whether an edge is occluding or nonoccluding. 
Figure 19
 
Predicted occlusion and nonocclusion edges using only contrast as local feature in the maximum likelihood classifier. The edges in green are the predicted occlusion edges, and the edges in red are the predicted nonocclusion edges.
Discussion
In this study, we investigated the relative proportions of occlusion and nonocclusion edges in natural scenes. We also investigated whether the local statistics (i.e., contrast) of images could provide significant information regarding the identity of occlusion and nonocclusion edges. Finally, we sorted nonocclusion edges into three subcategories (reflectance changes, cast shadows, and surface changes) and analyzed the properties of local features. The five main findings of this study are as follows: 
  1.  
    Given that an edge was detected by the Canny algorithm, approximately half of the edges were labeled as occlusion edges and half as nonocclusion edges. There was good reliability across participants, as only 5% of the edges did not satisfy the 80% between-participant agreement criterion.
  2.  
    When the edges were detected by a Canny operator, the average contrast of occlusion edges was found to be significantly higher (p < 0.001) than the contrast of nonocclusion edges.
  3.  
    A maximum likelihood classifier with contrast as the only local feature could correctly predict 83% of human labeling decisions when classifying occlusion versus nonocclusion edges.
  4.  
    The contrast distribution of hand-labeled occlusion edges is approximately uniform with little bias towards low contrasts, whereas the distribution of occlusion edges found by the Canny algorithm is significantly different (two-sample Kolmogorov–Smirnov test, D = 0.47, p < 0.001). This implies that there are many occlusion edges that are easily identified by human observers but will be missed by common edge detection algorithms such as Canny.
  5.  
    Nonocclusion edges due to cast shadows occur relatively rarely in our collection of natural scenes compared to surface change and reflectance change edges.
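The two-sample Kolmogorov–Smirnov statistic used in finding 4 is simple to compute directly; a minimal NumPy sketch follows (the D = 0.47 value above comes from the paper's data, which is not reproduced here):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two samples' empirical CDFs."""
    a, b = np.sort(np.asarray(a, float)), np.sort(np.asarray(b, float))
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side='right') / len(a)
    cdf_b = np.searchsorted(b, grid, side='right') / len(b)
    return np.abs(cdf_a - cdf_b).max()
```

Applied to the hand-labeled and Canny-detected contrast samples, D near 0 would indicate matching distributions, while a large D (as reported) indicates the Canny detector samples a different contrast distribution than human observers do.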
Limitations of the study
Before discussing these results in detail, it is important that we point out some limitations and biases in our study. To study the statistical properties of edges, it is first necessary to identify the edges in natural scenes. Our study used two methods for classifying edges. First, we used the standard Canny edge detection algorithm to locate edges, which were then classified by human observers as occlusion or nonocclusion edges. For the second approach, we asked human observers to make the initial label by tracing the occlusion edges in natural scene images. There are limitations to both of these approaches. We do not have the ground truth regarding the cause of an edge, and both the Canny algorithm and human observers would have biases if they could be compared to the ground truth. The Canny detector has several parameters that can be varied, and although we believe we chose reasonable parameters, a different set of parameters could significantly alter the results. Furthermore, the Canny detector can only detect an edge that has a significant local luminance difference. However, observers hand labeling an image can use a variety of long-range cues as well as high-level knowledge of objects to interpret where an edge might occur. Indeed, our results show that human observers identify a significant number of edges that do not show reliable local contrast differences. As noted earlier, this is in line with a number of studies showing that many hand-labeled edges are not identifiable locally (DiMattina et al., 2012; McDermott, 2004). However, hand labeling has its own difficulties. Although hand-labeled edges have been used in a number of studies (DiMattina et al., 2012; Fowlkes et al., 2007), it is not clear what biases are introduced with this method. Certainly, one cannot ask human participants to label each and every edge in an image. Instead, our approach has participants focus on the well-defined objects in an image and not on the fine detail (e.g., grass). 
One might argue that the use of a laser range finder would provide a better estimate of ground truth. As mentioned in the Introduction, recent studies with such equipment have provided important insights relating depth changes and luminance changes (e.g., Liu, Bovik, & Cormack, 2008). However, we wish to emphasize that laser range finders also have limitations and biases. It is quite likely that lidar will be useful in the sorts of classification we performed in this study. However, it is not yet clear how accurately such a system will represent the ground truth of the scenes (e.g., 3-D structure, reflectances, and shadows). 
Occlusion edges versus nonocclusion edges
In this study, we found that when edges were defined by the Canny detector, approximately half were labeled occlusion edges and half were labeled nonocclusion edges. The sorting of edges into these two categories showed good reliability across our five observers, as only 13% of the edges did not satisfy the 100% between-participant agreement. We should note that this proportion of occlusion versus nonocclusion edges will depend on the particular set of images. It is certainly possible to find images that are dominated by nonocclusion edges. In this article, we did not include pure texture images in our image set, but such images might well consist of only nonocclusion edges. In an unpublished study, Elder et al. (1999) describe results that are broadly similar to those of this experiment classifying edge categories. Although there are important differences in the way the edges were classified, as well as differences in the choice of images and edge detector, the proportions of edge classes are largely in line with our results. Our results also show that the contrasts of the occluding edges were significantly higher than the contrasts of the nonoccluding edges (Figure 6). A number of object-recognition models begin by attempting to segregate each figure from the background (Leibe, Leonardis, & Schiele, 2008; Viola & Jones, 2001). Here, we are not arguing that figure-ground segregation must come first, but only that this information is in the local statistics. These results suggest that early visual processes could potentially assist in segregating figure from ground. 
As mentioned earlier, DiMattina et al. (2012) also used local statistics to identify occluding contours. However, our methods differ in several important ways. Their work compared labeled occluding edges to surface patches, while our work compares different classes of edges. They found that the contrasts of hand-labeled occlusion edge patches were usually higher than those of the surface patches; however, the surface patches may or may not contain unlabeled occlusion or nonocclusion edges. Our study compared the statistical differences between occlusion and nonocclusion edges. We found that the contrasts of the occlusion edges tend to be higher than the contrasts of nonocclusion edges. If an edge detected by the Canny edge detection algorithm has high contrast, then the probability that it is an occlusion edge is relatively high; most of the nonocclusion edges were found to have low contrast. 
Two graphs showing histograms of contrast (Figures 6a and 17a) may at first appear to be contradictory. Figure 6a shows that, when defined by the Canny detector, occlusion edges have higher contrasts than nonocclusion edges, and that few of the occluding edges have low contrasts. In Figure 17a, however, the contrast distribution for occluding edges appears to be relatively flat. The edges in Figure 17, however, were identified by hand labeling the image. The human observer can use a variety of long-range and high-level information to infer the presence of an edge. Observers can interpolate from far outside of the local area to estimate where an occluding edge may occur. Furthermore, a human observer can identify edges from texture boundaries that may be invisible to the Canny detector. What we conclude is that although the contrast distribution of all occluding edges may be relatively flat, the Canny detector will miss many low-contrast occluding edges. However, if a low-contrast edge is detected by the Canny operator, it is more likely to be a nonoccluding edge than an occluding edge. 
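The two contrast measures behind these histograms (Michelson and RMS contrast) can be computed directly from a patch's luminance values. A minimal pure-Python sketch; the function names and the example step edge are ours, not taken from the paper's code:

```python
# Michelson and RMS contrast of a luminance patch (illustrative sketch).
from math import sqrt

def michelson_contrast(patch):
    """(Lmax - Lmin) / (Lmax + Lmin) over the patch."""
    lo, hi = min(patch), max(patch)
    return (hi - lo) / (hi + lo)

def rms_contrast(patch):
    """Standard deviation of luminance divided by mean luminance.
    (RMS contrast is sometimes defined without the mean normalization;
    this normalized form is one common convention.)"""
    mean = sum(patch) / len(patch)
    var = sum((l - mean) ** 2 for l in patch) / len(patch)
    return sqrt(var) / mean

# A hypothetical step edge: dark side at 10 cd/m^2, bright side at 40.
patch = [10.0] * 8 + [40.0] * 8
print(michelson_contrast(patch))  # (40 - 10) / (40 + 10) = 0.6
print(rms_contrast(patch))        # std 15 / mean 25 = 0.6
```

For an ideal step edge the two measures coincide; on real patches with texture and blur they diverge, which is why Figure 6 reports both.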
Contrast as a local cue for edge category prediction
The results of the maximum likelihood classification using these edge measurements show that contrast is a good candidate for predicting whether or not an edge is due to an occlusion. Given an unknown edge from the Canny detector with high contrast, there is a high probability that it is an occlusion edge. Furthermore, these results imply that the early visual system could in principle begin to extract objects from their backgrounds using a probabilistic classification mechanism. The visual system could begin to build probabilities about edge classifications from early processing stages using local information such as contrast, and then refine these probabilities as more information becomes available from further downstream processing. 
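The general shape of such a maximum-likelihood scheme can be sketched in a few lines. This is not the paper's classifier: the Gaussian class likelihoods and the synthetic contrast values below are our illustrative assumptions, standing in for the empirical contrast distributions of Figure 6.

```python
# Maximum-likelihood classification of edges by contrast, with an
# 80%-20% train/test split (synthetic data; illustrative only).
import random
from math import exp, pi, sqrt

def fit_gaussian(xs):
    """Fit mean and variance of a univariate Gaussian to the samples."""
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, var

def likelihood(x, mu, var):
    return exp(-(x - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

def classify(contrast, occ_params, non_params):
    """Label the edge with the class under which its contrast is more likely."""
    return ("occlusion" if likelihood(contrast, *occ_params)
            >= likelihood(contrast, *non_params) else "nonocclusion")

random.seed(0)
# Synthetic contrasts: occlusion edges tend toward higher contrast.
occl = [random.gauss(0.6, 0.15) for _ in range(300)]
non = [random.gauss(0.25, 0.10) for _ in range(300)]

# 80%-20% split: fit on the first 240 of each class, test on the rest.
occ_p, non_p = fit_gaussian(occl[:240]), fit_gaussian(non[:240])
hits = sum(classify(c, occ_p, non_p) == "occlusion" for c in occl[240:])
hits += sum(classify(c, occ_p, non_p) == "nonocclusion" for c in non[240:])
acc = hits / 120
print(acc)  # held-out accuracy; high because the synthetic classes separate well
```

Because the two contrast distributions overlap only in their tails here, a single scalar cue yields high held-out accuracy; with the real distributions, the analogous classifier reached roughly 83%.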
Nonocclusion subcategorization
We also focused on those edges that were labeled as nonocclusion edges. In our study, five participants classified these edges into three subcategories (a cast shadow, a reflectance change, or a surface orientation change, or some combination of the three). We draw several conclusions from this study. First, in our collection of images, very few edges are due to cast shadows. This may reflect the actual rarity of shadow boundaries or the Canny detector's general ineffectiveness at picking up shadows, perhaps because the edges are too spread out, blurred, or very low in contrast. Most of the nonocclusion edges were labeled as surface change or reflectance change edges. However, there was low mutual agreement across participants on the proportions of reflectance change and surface change edges. We do not believe these variations were due to the small participant pool or to participants' misunderstanding the definitions of the subcategories of nonocclusion edges. It is likely that many of the nonocclusion edges do not have enough cues for participants to make a confident decision about how to classify them, or that these subcategories are not mutually exclusive and that edges can form through the co-occurrence of multiple causes. For example, when there is a crease on a rock or a vein on a leaf, it can be quite difficult to determine whether there is only a surface change or whether there is also (or only) a reflectance change. The bottom row of images in Figure 1 shows examples of nonocclusion edges that did not meet the minimum 80% between-participant agreement criterion; making a definite decision about the subcategory for these edges is indeed a difficult task. Clearly, a significant proportion of edges will be ambiguous in natural scenes. 
A possible account for high-contrast occlusion edges
We have one possible explanation for why occluding edges are higher in contrast than nonoccluding edges. Although the data are not conclusive on this issue, the reflectance of materials in a natural scene ranges from roughly 3% (e.g., coal) to 90% (e.g., fresh snow), with a mean reflectance of approximately 36% (Dobos, 2006). This gives a maximum range of about 30:1 for contrasts within a surface under constant illumination (Dobos, 2006; McCann & Rizzi, 2011; Radonjić, Allred, Gilchrist, & Brainard, 2011). However, an occlusion boundary will typically have some depth difference between the surfaces and will likely be composed of surfaces of two different materials under probably different illumination. This allows for the possibility that the contrast difference will be much greater. A casual inspection of the images in Figure 12 shows that the occlusion boundaries are often marked by large illumination differences. Cast shadows are also defined by illumination boundaries, and as shown in Figure 13, they have the highest contrasts among nonocclusion edges. Nonocclusion edges due to a surface change were shown to have the lowest contrasts. We have observed that many of the surface change edges result from a very small angular change of the surface with respect to the light source, producing only a small illumination difference. 
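The arithmetic of this argument is worth making explicit. Under constant illumination the within-surface luminance ratio is bounded by the reflectance range, while an illumination difference across an occlusion boundary multiplies that ratio. The illumination ratio below is a hypothetical value of ours, chosen only to illustrate the effect:

```python
# Worked version of the reflectance-range argument (illustrative numbers).
r_min, r_max = 0.03, 0.90                 # coal vs. fresh snow reflectance
print(r_max / r_min)                       # 30:1 within a uniformly lit surface
print((r_max - r_min) / (r_max + r_min))   # max Michelson contrast, ~0.94

# With, say, a 4:1 illumination difference across an occlusion boundary:
illum_bright, illum_dark = 4.0, 1.0
lum_hi = r_max * illum_bright              # bright material, bright side
lum_lo = r_min * illum_dark                # dark material, shadowed side
print(lum_hi / lum_lo)                     # ratio grows to 120:1
```

Any illumination difference across the boundary thus expands the available luminance ratio beyond the 30:1 reflectance bound, consistent with occlusion edges occupying the high-contrast end of the distributions.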
Conclusion
In this article, we present evidence that different classes of edge have significantly different local statistics. The primary difference we found was a difference in contrast. Our results suggest that information regarding the cause of an edge exists at the earliest stages of the visual system where contrast can be estimated (i.e., the retina). At this time, we do not have evidence that the early visual system or the observer uses this information to classify edges; we can note only that the information is present. Finally, we must again emphasize that the results described here may vary with the particular algorithm used to identify an edge. We used a Canny operator, but there are a number of possible edge-detection approaches, and the Canny operator itself has a number of parameters. We explored the results using two scale parameters of the Canny operator and found largely similar results; however, there is a much larger set of parameters that could be explored. We must also note that the image set we chose (the McGill Color Image Database) may have a number of biases, and a wider selection of images may show differences from the proportions of edge classes shown here. Nevertheless, we believe these results have important implications for how the visual system might begin object segregation in natural scenes. 
Acknowledgments
This material is based upon work supported in part by a Google Faculty Research Award to David J. Field and National Science Foundation Grant Number 1054612 to Damon M. Chandler. We thank Professor James Cutting and his lab and Professor Khena Swallow for helpful comments on the manuscript. We thank Md Mushfiqul Alam for his assistance in running the experiments. 
Commercial relationships: Financial support was provided by Google. 
Corresponding author: Kedarnath P. Vilankar. 
Email: kpv9@cornell.edu. 
Address: Department of Psychology, Cornell University, Ithaca, NY. 
References
Atick J. J. (1992). Could information theory provide an ecological theory of sensory processing? Network, 3, 213–251. [CrossRef]
Balboa R. M. Grzywacz N. M. (2000). Occlusions and their relationship with the distribution of contrasts in natural images. Vision Research, 40 (19), 2661–2669. [CrossRef] [PubMed]
Canny J. (1986). A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6, 679–698. [CrossRef]
DiMattina C. Fox S. A. Lewicki M. S. (2012). Detecting natural occlusion boundaries using local cues. Journal of Vision, 12 (13): 15, 1–21, http://www.journalofvision.org/content/12/13/15, doi:10.1167/12.13.15. [PubMed] [Article]
Dobos E. (2006). Albedo. In Lal R. (Ed.), Encyclopedia of soil science ( 2nd ed.; pp. 64–66). Boca Raton, FL: CRC Press.
Elder J. H. (1999). Are edges incomplete? International Journal of Computer Vision, 34 (2–3), 97–122. [CrossRef]
Elder J. H. Beniaminov D. Pintilie G. (1999). Edge classification in natural images [Abstract]. Investigative Ophthalmology & Visual Science, 40: s357. (Presentation available at elderlab.yorku.ca/∼elder/?page=pub&lb=lbNone).
Field D. J. (1987). Relations between the statistics of natural images and the response properties of cortical cells. Journal of the Optical Society of America A, 4 (12), 2379–2394. [CrossRef]
Field D. J. Hayes A. Hess R. F. (1993). Contour integration by the human visual system: Evidence for a local “association field.” Vision Research, 33, 173–193. [CrossRef] [PubMed]
Field D. J. Tolhurst D. J. (1986). The structure and symmetry of simple-cell receptive-field profiles in the cat's visual cortex. Proceedings of the Royal Society of London, Series B: Biological Sciences, 228 (1253), 379–400. [CrossRef]
Fowlkes C. C. Martin D. R. Malik J. (2007). Local figure–ground cues are valid for natural images. Journal of Vision, 7 (8): 2, 1–9, http://www.journalofvision.org/content/7/8/2, doi:10.1167/7.8.2. [PubMed] [Article]
Geisler W. Perry J. Super B. Gallogly D. (2001). Edge co-occurrence in natural images predicts contour grouping performance. Vision Research, 41 (6), 711–724. [CrossRef] [PubMed]
Heitger F. von der Heydt R. Kubler O. (1994). A computational model of neural contour processing: Figure-ground segregation and illusory contours. From Perception to Action Conference ( pp. 181–192). Los Alamitos, CA: IEEE.
Hoiem D. Efros A. A. Hebert M. (2011). Recovering occlusion boundaries from an image. International Journal of Computer Vision, 91 (3), 328–346. [CrossRef]
Howe C. Q. Purves D. (2002). Range image statistics can explain the anomalous perception of length. Proceedings of the National Academy of Sciences, USA, 99 (20), 13184–13188. [CrossRef]
Huang J. Lee A. B. Mumford D. (2000). Statistics of range images. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition ( Vol. 1, pp. 324–331). Hilton Head Island, SC: IEEE.
Hyvärinen A. Hoyer P. O. (2001). A two-layer sparse coding model learns simple and complex cell receptive fields and topography from natural images. Vision Research, 41, 2413–2423. [CrossRef] [PubMed]
Ing A. D. Wilson J. A. Geisler W. S. (2010). Region grouping in natural foliage scenes: Image statistics and human performance. Journal of Vision, 10 (4): 10, 1–19, http://www.journalofvision.org/content/10/4/10, doi:10.1167/10.4.10. [PubMed] [Article] [CrossRef] [PubMed]
Jones J. P. Palmer L. A. (1987). An evaluation of the two-dimensional Gabor filter model of simple receptive fields in cat striate cortex. Journal of Neurophysiology, 58 (6), 1233–1258. [PubMed]
Kersten D. (1987). Predictability and redundancy of natural images. Journal of the Optical Society of America A, 4, 2395–2400. [CrossRef]
Leclerc Y. G. Zucker S. W. (1987). The local structure of image discontinuities in one dimension. IEEE Transactions on Pattern Analysis and Machine Intelligence, 9, 341–355. [CrossRef] [PubMed]
Leibe B. Leonardis A. Schiele B. (2008). Robust object detection with interleaved categorization and segmentation. International Journal of Computer Vision, 77 (1–3), 259–289. [CrossRef]
Li W. Gilbert C. D. (2002). Global contour saliency and local colinear interactions. Journal of Neurophysiology, 88 (5), 2846–2856. [CrossRef] [PubMed]
Liu Y. Bovik A. C. Cormack L. K. (2008). Disparity statistics in natural scenes. Journal of Vision, 8 (11): 19, 1–14, http://www.journalofvision.org/content/8/11/19, doi:10.1167/8.11.19. [PubMed] [Article]
Liu Y. Cormack L. K. Bovik A. C. (2011). Statistical modeling of 3-D natural scenes with application to Bayesian stereopsis. IEEE Transactions on Image Processing, 20 (9), 2515–2530. [PubMed]
Mante V. Frazor R. A. Bonin V. Geisler W. S. Carandini M. (2005). Independence of luminance and contrast in natural scenes and in the early visual system. Nature Neuroscience, 8 (12), 1690–1697. [CrossRef] [PubMed]
Martin D. R. Fowlkes C. Tal D. Malik J. (2001). A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. Proceedings of the Eighth IEEE International Conference on Computer Vision ( Vol. 2, pp. 416–423). Vancouver, BC: IEEE.
Martin D. R. Fowlkes C. C. Malik J. (2004). Learning to detect natural image boundaries using local brightness, color, and texture cues. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26, 530–549. [CrossRef] [PubMed]
McCann J. J. Rizzi A. (2011). The art and science of HDR imaging ( Vol. 26). Chichester, West Sussex, UK: John Wiley & Sons.
McDermott J. (2004). Psychophysics with junctions in real images. Perception, 33 (9), 1101–1128. [CrossRef] [PubMed]
Olmos A. Kingdom F. A. A. (2004). A biologically inspired algorithm for the recovery of shading and reflectance images. Perception, 33, 1463–1473. [CrossRef] [PubMed]
Olshausen B. A. Field D. J. (1996). Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37, 3311–3325. [CrossRef]
Potetz B. Lee T. S. (2003). Statistical correlations between two-dimensional images and three-dimensional structures in natural scenes. Journal of the Optical Society of America A, 20 (7), 1292–1303. [CrossRef]
Potetz B. Lee T. S. (2005). Scaling laws in natural scenes and the inference of 3D shape. In Weiss Y. Schölkopf B. Platt J. C. (Eds.), Advances in neural information processing systems 18 (pp. 1089–1096). Cambridge, MA: MIT Press.
Radonjić A. Allred S. R. Gilchrist A. L. Brainard D. H. (2011). The dynamic range of human lightness perception. Current Biology, 21 (22), 1931–1936. [CrossRef] [PubMed]
Shashua A. Ullman S. (1990). Grouping contours by iterated pairing networks. In Lippmann R. P. Moody J. E. Touretzky D. S. (Eds.), Advances in neural information processing systems 3 ( pp. 335–341). Burlington, MA: Morgan Kaufmann.
Simoncelli E. P. Olshausen B. A. (2001). Natural image statistics and neural representation. Annual Review of Neuroscience, 24 (1), 1193–1216. [CrossRef] [PubMed]
van Hateren J. H. van der Schaaf A. (1998). Independent component filters of natural images compared with simple cells in primary visual cortex. Proceedings of the Royal Society of London, Series B: Biological Sciences, 265, 359–366. [CrossRef]
Vecera S. P. Vogel E. K. Woodman G. F. (2002). Lower region: A new cue for figure-ground assignment. Journal of Experimental Psychology: General, 131 (2), 194–205. [CrossRef] [PubMed]
Viola P. Jones M. (2001). Rapid object detection using a boosted cascade of simple features. Proceedings of the 2001 IEEE computer society conference on computer vision and pattern recognition ( Vol. 1, pp. 511–518). Kauai, HI: IEEE Computer Society.
Yang Z. Purves D. (2003a). Image/source statistics of surfaces in natural scenes. Network: Computation in Neural Systems, 14 (3), 371–390. [CrossRef]
Yang Z. Purves D. (2003b). A statistical explanation of visual space. Nature Neuroscience, 6 (6), 632–640. [CrossRef]
Zhou C. Mel B. W. (2008). Cue combination and color edge detection in natural scenes. Journal of Vision, 8 (4): 4, 1–25, http://www.journalofvision.org/content/8/4/4, doi:10.1167/8.4.4. [PubMed] [Article]
Figure 1
 
Examples of occlusion and nonocclusion edges. The first image is a modified version of Adelson's checkerboard illusion (web.mit.edu/persci/people/adelson/checkershadow_illusion.html).
Figure 2
 
Images from the McGill Color Image Database used in the study.
Figure 3
 
The graphical user interface for the experiment. The image on top shows the interface used to make a decision between occlusion and nonocclusion categories using the horizontal slider. The image below shows the interface with the triangular slider used for the subcategorization of nonocclusion edges after the participant has rated the edge to be in the nonocclusion category with a confidence of 75% or higher.
Figure 4
 
(a) An 81 × 41-pixel extracted edge patch. The patch is aligned such that the higher luminance side is on top and the lower luminance side is on the bottom. The edge line between the two sides is at the 41st pixel row. (b) A sample of the extracted occlusion edges. (c) A sample of the extracted nonocclusion edges. Both sets of extracted edges were first identified using the Canny edge operator and then classified by human observers.
Figure 5
 
A set of edge patches not selected for the statistical analysis of occlusion and nonocclusion edges. These patches have multiple edges in the extracted patch.
Figure 6
 
The histograms of Michelson contrast and root-mean-square (RMS) contrast for occlusion and nonocclusion edges. (a–b) Histograms of Michelson contrast in occlusion and nonocclusion edge patches. (c–d) Histograms of RMS contrast in occlusion and nonocclusion edge patches. (e–f) Empirical cumulative distribution functions (CDFs) of Michelson contrast and RMS contrast in occlusion and nonocclusion edge patches. The blue curve shows the CDF of contrast in occlusion edges, and the red curve shows the CDF of contrast in nonocclusion edges.
Figure 7
 
One-dimensional profiles of normalized average occlusion and nonocclusion edges. (a) The normalized average occlusion edge in blue with 20 sample occlusion edges. (b) The normalized average nonocclusion edge in red with 20 sample nonocclusion edges. The edges in (a) and (b) were first detected by the Canny operator and then categorized by participants as occlusion or nonocclusion edges. The slope of occlusion edges was significantly different from the slope of nonocclusion edges, t(671) = 16.08, p < 0.0001.
Figure 8
 
Scatter plots of the mean luminance versus contrast of occlusion and nonocclusion edge patches. (a) Scatter plot of mean luminance versus Michelson contrast of occlusion edge patches, correlation r(328) = 0.08, p = 0.15, and nonocclusion edge patches, correlation r(341) = −0.13, p = 0.02. (b) Scatter plot of mean luminance versus root-mean-square contrast of occlusion edge patches, correlation r(328) = 0.08, p = 0.15, and nonocclusion edge patches, correlation r(341) = −0.10, p = 0.06. The blue open circles represent occlusion edge patches, and the red open circles represent nonocclusion edge patches.
Figure 9
 
The density of triangular-slider placements for each participant, and the overall average density computed from the mean slider position across participants for each edge. The overall density is not the sum of the participants' slider positions; each point corresponding to an edge in the overall density map is the average slider position for that edge across all participants (vertices: Reflectance Change [RC], Cast Shadow [CS], and Surface Change [SC]).
Figure 10
 
(a) The triangular slider for subcategorization was divided into three regions representing the three subcategories (Reflectance Change [RC], Cast Shadow [CS], and Surface Change [SC]) of nonocclusion edges. (b) The proportions of subcategories of nonocclusion edges for each participant and overall mean proportions. Overall, approximately 31% of edges were due to reflectance changes, 8% were due to cast shadows, and 56% were due to surface changes.
Figure 11
 
(a) The proportions of subcategories of nonocclusion edges at different degrees of mutual agreement between participants. (b) The three ellipses show the variations in horizontal and vertical directions in each region, corresponding to the three subcategories of nonocclusion edges. The three asterisks represent the mean placement of the slider for all edges in each region. The colored circles represent the mean placement of the slider for each edge.
Figure 12
 
Samples of occlusion and nonocclusion edges categorized with at least 80% between-participant mutual agreement. Each edge shown here is bounded by a red box. The first row shows edges categorized as occlusion edges. The next three rows correspond to edges categorized as reflectance changes, cast shadows, and surface changes, and the bottom row shows the indeterminate edges which did not meet 80% between-participant mutual agreement.
Figure 13
 
The mean contrast of each subcategory at different degrees of between-participant mutual agreement. The error bars represent the mean standard deviations of contrast in each category. The contrast difference between each category was statistically significant, F(2, 193) = 99.92, p < 0.0001.
Figure 14
 
One-dimensional profiles of normalized average nonocclusion edge subcategories. (a) The normalized average reflectance change (RC) edge in red, with 20 sample RC edges. (b) The normalized average cast shadow (CS) edge in red, with 13 sample CS edges. (c) The normalized average surface change (SC) edge in red, with 20 sample SC edges. The slope of CS edges was significantly different from those of RC (p < 0.01) and SC (p < 0.01) edges.
Figure 15
 
(a) Original high-resolution image and (b) occlusion edge tracing in red from a participant.
Figure 16
 
A sample set of the extracted hand-labeled occlusion edges.
Figure 17
 
The distribution of hand-labeled occlusion edge contrast in natural scenes. (a) The distribution of Michelson contrast in hand-labeled occlusion edge patches. (b) The distribution of root-mean-square contrast in hand-labeled occlusion edge patches.
Figure 18
 
One-dimensional profiles of normalized average occlusion edges, nonocclusion edges, and hand-traced occlusion edges. (a) The normalized average occlusion edge in blue, with 20 sample occlusion edges. (b) The normalized average nonocclusion edge in red, with 20 sample nonocclusion edges. The edges in (a) and (b) were first detected by the Canny operator and then categorized by participants as occlusion or nonocclusion edges. The slope of occlusion edges was significantly different from the slope of nonocclusion edges, t(671) = 16.08, p < 0.0001. (c) The normalized average hand-traced occlusion edge in blue, with 20 sample occlusion edges (details in Human-labeled occlusion edges).
Figure 19
 
Predicted occlusion and nonocclusion edges using only contrast as a local feature in the maximum likelihood classifier. The edges in green are the predicted occlusion edges, and the edges in red are the predicted nonocclusion edges.
Table 1
 
The proportion of occlusion and nonocclusion edges at various degrees of mutual agreement between participants. The fourth column shows the proportion of edges which did not satisfy the between-participant agreement criteria.
Mutual agreement Occlusion edge Nonocclusion edge Edges not satisfying the criterion
100% (5 out of 5) 44% 43% 13%
80% (4 out of 5) 48% 47% 5%
60% (3 out of 5) 50% 50% 0%
Table 2
 
The confusion matrix showing the ability of Michelson contrast, as a local cue, to predict whether an edge is an occlusion edge or a nonocclusion edge.
True Class Predicted class
Occlusion edge Nonocclusion edge
Occlusion edge 83.09% ± 4.41% 16.91% ± 4.41%
Nonocclusion edge 16.45% ± 4.53% 83.55% ± 4.53%