Research Article  |   December 2010
Exact feature probabilities in images with occlusion
Journal of Vision December 2010, Vol.10, 42. doi:https://doi.org/10.1167/10.14.42
Citation: Xaq Pitkow; Exact feature probabilities in images with occlusion. Journal of Vision 2010;10(14):42. https://doi.org/10.1167/10.14.42.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

To understand the computations of our visual system, it is also important to understand the natural environment it evolved to interpret. Unfortunately, existing models of the visual environment are either unrealistic or too complex for mathematical description. Here we describe a naturalistic image model and present a mathematical solution for the statistical relationships between the image features and model variables. The world described by this model is composed of independent, opaque, textured objects, which occlude each other. This simple structure allows us to calculate the joint probability distribution of image values sampled at multiple arbitrarily located points, without approximation. This result can be converted into probabilistic relationships between observable image features as well as between the unobservable properties that caused these features, including object boundaries and relative depth. We show that the image model is sufficient to explain a wide range of natural scene properties. Finally, we discuss the implications of this description of natural scenes for the study of vision.

Introduction
A major goal of vision is to identify physical objects in the world and their attributes. The relevant sensory evidence—an image—is ambiguous. A visual system must make guesses to interpret this sensory information, and good guesses should account for the statistics of the input. Consequently, the statistical structure of natural images has become a subject of fundamental importance for applications ranging from computer graphics to neuroscience: Understanding and exploiting natural regularities should lead to better visual performance and improved visual representations, whether in image compression or in the brain. 
Most previous studies of natural scene statistics have characterized natural scenes as linear superpositions of image features. Principal Components Analysis (Hancock, Baddeley, & Smith, 1992; Liu & Shouval, 1994), Independent Components Analysis (Bell & Sejnowski, 1997; Olshausen & Field, 1996), wavelet transforms (Portilla & Simoncelli, 2000), and linear clutter models (Grenander & Srivastava, 2001; Mumford & Gidas, 2001) each identify related sets of features that when added together can reconstitute or approximate a natural image. Other methods have improved upon these purely linear, additive descriptions by including multiplicative modulation (e.g., Gaussian scale mixtures (Wainwright & Simoncelli, 2000) and hierarchically correlated variances (Karklin & Lewicki, 2005)). These non-linear enhancements are useful for representing textures, where common variables like surface properties and illumination intensity and direction naturally comodulate the contrast of features like local orientation. Yet because visual images are not caused by summation but by occlusion, it is important to develop models of natural images that are constructed using more accurate non-linear combinations of features (Reinagel & Laughlin, 2001; Simoncelli & Olshausen, 2001). 
We therefore chose a simplified model of natural images, colorfully known as the dead leaves model (Matheron, 1975), for which occlusion is fundamental: The virtual world described by this model is composed of an infinitely deep stack of randomly positioned, flat objects (“leaves”) that occlude each other (Figures 1A and 1B). These objects have attributes of size, shape, brightness, and texture, each independently drawn from specified distributions. While the model is only an approximation to our true physical environment, nonetheless it generates images that share many important attributes with natural images, most obviously the ubiquitous boundaries between relatively homogeneous regions (Ruderman, 1997). If the object sizes are drawn from an inverse-cubic power law distribution within finite upper and lower bounds, the model becomes approximately scale-invariant (Gousseau & Roueff, 2007; Lee, Mumford, & Huang, 2001; Ruderman, 1997) and reproduces several known statistical properties of natural images, including the spatial power spectrum (Ruderman, 1997) and bivariate distributions of pixel intensities and wavelet coefficients (Lee et al., 2001). Despite the demonstrated utility of the model, until now there has been no way to calculate higher order statistical properties of interest except by empirical sampling, which cannot provide the insights that exact results can. 
Figure 1
 
Sample images generated by the dead leaves model. We see layers of objects with random sizes, shapes, positions, and brightnesses that occlude other objects below. (A) All objects are black or white circles with a relatively narrow range of sizes. (B) All objects are textured ellipses with a broad range of sizes drawn from a distribution proportional to size^−3, producing approximate scale invariance (Gousseau & Roueff, 2007; Lee et al., 2001; Ruderman, 1997). Straightforward generalizations allow other ensembles of shape and texture. (C) Illustration of an object membership function מ. Pixels within a member set of מ are all sampled from the same object. A sample dead leaves image with several objects (gray circles) and a set of six pixel locations (numbered points) are shown. For this configuration, the object membership function is מ = {126∣3∣45}.
Here we derive an exact solution to the dead leaves model, by calculating joint probability distributions explicitly for arbitrary image features. This solution also provides a principled way to relate the features in a dead leaves image to the unobserved object attributes that cause these features. Since these relationships are precisely what we rely upon to see, this result thereby elevates the dead leaves model from an interesting approximation of natural images to a valuable tool for modeling perceptual inference and neural computation in the visual system. 
To illustrate how this solution helps us understand natural scenes, we apply it to explain the highly non-Gaussian probability distributions of edge features. Edges are important because they describe stimuli to which neurons in the early visual system are sensitive and because high-order correlations between them reflect the physical objects and attributes in the visual world. Since the functional significance of neural responses to features can depend on the shape of the feature distribution (Laughlin, 1981; Sharpee & Bialek, 2007; Zetzsche & Nuding, 2005), it is important to understand why the distributions have their observed structure. 
First, we look specifically at the marginal, joint, and conditional distributions of wavelet coefficients, i.e., the image overlap with localized, oriented filters. In natural images, those marginal distributions have heavy tails (Mallat, 1989; Ruderman & Bialek, 1994; Wainwright & Simoncelli, 2000), and the joint and conditional distributions have peculiar shapes (diamonds, pillows, bowties) that depend on the orientation and distance between the wavelets (Buccigrossi & Simoncelli, 1999; Lee et al., 2001; Simoncelli & Schwartz, 1999). We show how these non-Gaussian distributions arise naturally from occlusion by spatially extended objects, even when the objects themselves have Gaussian-distributed textures. Second, we compute the likelihood that a given pair of local object boundaries comes from the same physical contour. When estimated empirically from natural images, this likelihood predicts human judgments about contours (Geisler, Perry, Super, & Gallogly, 2001). Our solution of the dead leaves model recovers the empirical statistics but only if one properly accounts for the relative depths at local boundaries, implicating depth cues in simple judgments about contours. Finally, we explain how scale invariance and occlusion interact to reproduce the natural statistical relationships between edges. 
Results
Solving the dead leaves model
The pixels of a dead leaves image are fully determined by the properties of objects that are at least partially unoccluded. These properties are drawn independently from specified distributions over position, depth, size and shape, and texture. Texture can include both mean intensity and (possibly correlated) variations about the mean. When we say that we have solved the dead leaves model, we mean that we can calculate the joint probabilities of any model variables of interest, whether pixel intensities or object properties. This would be straightforward if the image components were related by linear superposition but is much more difficult due to the strong non-linearity of occlusion. 
The essential property that makes the dead leaves model tractable is that different objects have independent attributes. Others have invoked the independence of object properties to derive the two-point correlation functions (Ruderman, 1997) and bivariate intensity probabilities (Lee et al., 2001) using a recursive argument that accounts for the way nearby objects occlude more distant ones. We were able to generalize this calculation from two points to an arbitrary collection of N pixels, for which we can now calculate the multivariate joint intensity distribution. This distribution can then be transformed into feature probabilities and related to the unobserved object properties. 
If one samples the intensity of a particular dead leaves image at various locations, each pixel value will be determined by the texture of whichever object is at the top of the stack at that location. All pixels that fall into the same object share its texture and are thereby correlated; pixels sampling from different objects are independent. Thus, if we can specify how the pixels are divided geometrically into objects, then we know the complete correlation structure for that image. 
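The generative process just described can be sketched in a few lines. The opaque disks, Gaussian brightness, and inverse-cubic size law follow the model description above; the patch size, object count, and size bounds are arbitrary illustrative choices, not the paper's exact ensemble.

```python
import numpy as np

def sample_dead_leaves(size=128, n_objects=500, r_min=4.0, r_max=32.0, seed=0):
    """Render a toy dead leaves image: opaque disks drawn back to front,
    so each later (nearer) disk occludes everything beneath it."""
    rng = np.random.default_rng(seed)
    img = np.full((size, size), 0.5)              # deep background
    yy, xx = np.mgrid[0:size, 0:size]
    # inverse-cubic size law p(r) ~ r^-3 on [r_min, r_max], via inverse CDF
    u = rng.random(n_objects)
    radii = ((1 - u) / r_min**2 + u / r_max**2) ** -0.5
    for r in radii:                               # deepest object first
        cx, cy = rng.random(2) * size
        brightness = rng.normal(0.5, 0.2)         # Gaussian object intensity
        mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= r**2
        img[mask] = brightness                    # occlusion: overwrite below
    return img

img = sample_dead_leaves()
```

Pixels inside the final visible region of one disk share its brightness and are thereby correlated; pixels from different disks are independent, exactly as described above.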
We can mathematically describe the configuration of objects at a given set of N pixels by defining an object membership function, מ, designating which pixels are “members” of which objects. (The symbol מ is the Hebrew letter mem, chosen to evoke the word membership.) In mathematical language, מ is a set partition of the N pixels, so it is technically a set of sets: each set corresponds to a different object, and it contains the pixel locations at which that object is unobscured by any other objects. For example, one might find in a given image that pixels x1, x2, and x6 fall into one object, x4 and x5 fall into a different object, and x3 is alone in a third object (Figure 1C). Then, the corresponding object membership function can be expressed as מ = {{x1,x2,x6},{x3},{x4,x5}}, or abbreviated as מ = {126∣3∣45}.
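As a concrete illustration, a set partition can be represented directly as a set of sets; the small helper below is hypothetical, not part of the paper's software.

```python
def membership(assignments):
    """Object membership function: group pixel labels by the object each
    pixel's value was sampled from, giving a set partition (set of sets)."""
    groups = {}
    for pixel, obj in assignments.items():
        groups.setdefault(obj, set()).add(pixel)
    return frozenset(frozenset(g) for g in groups.values())

# the configuration of Figure 1C: pixels 1, 2, 6 fall in one object,
# pixel 3 in a second, pixels 4 and 5 in a third
mem = membership({1: "a", 2: "a", 6: "a", 3: "b", 4: "c", 5: "c"})
# mem corresponds to {126 | 3 | 45} in the abbreviated notation
```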
The object membership function does not contain information directly about the intensities but only about which pixels are correlated. Given a particular object membership מ for some selected pixels, the probability distribution P(I∣מ) of image intensities I factorizes into a product over objects: The different object textures are independent, and hence so are their respective pixels. In the above example, the probability distribution of intensities at those six pixels would be P(I∣מ) = P(I1, I2, I6∣מ)P(I3∣מ)P(I4, I5∣מ). In general, P(I∣מ) = ∏n=1^∣מ∣ P(Iמn∣מ), (1) where ∣מ∣ is the number of objects, מn is the set of pixels falling into the nth object of מ, and Iמn is a vector of intensities at those pixels. The factors P(Iמn∣מ) reflect the joint probabilities of intensities in a single, textured object. This formulation requires that we specify a texture model to provide these probabilities. The texture model may be a simple distribution over object brightness (as in Figure 1A) or may include extra hidden variables for each object, such as texture variance and orientation (as in Figure 1B). Note that the texture model is wholly unrelated to the geometrical aspects of the dead leaves model.
If the geometric configuration of objects is not known, then the joint distribution of intensities P(I) is an average over all possible configurations. The factorized conditional distributions of Equation 1 are then combined in the weighted sum P(I) = ∑מ P(I∣מ)P(מ). (2) This is a mixture distribution in which each mixture component P(I∣מ) has a distinct correlation structure among pixels, induced by the different object membership functions. The weighting coefficients are object membership probabilities P(מ), i.e., the probability of observing the corresponding memberships over all possible dead leaves images with a given shape ensemble. Figure 2 shows examples of simple mixture distributions.
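A minimal sketch of this mixture for the simplest case of two pixels, assuming the uniform-Gaussian-brightness-plus-white-noise texture model used later in the paper. The membership probability p_same is taken here as a given input rather than computed from a shape ensemble.

```python
import numpy as np

def gauss2(x, cov):
    """Zero-mean bivariate Gaussian density at point x."""
    return np.exp(-0.5 * x @ np.linalg.inv(cov) @ x) / (
        2 * np.pi * np.sqrt(np.linalg.det(cov)))

def pair_density(i1, i2, p_same, s2_obj=1.0, s2_noise=0.05):
    """P(I1, I2) for two pixels as a two-component mixture over object
    memberships: same object (shared Gaussian brightness, variance s2_obj,
    plus independent white noise s2_noise) vs. two independent objects."""
    x = np.array([i1, i2])
    cov_same = np.full((2, 2), s2_obj) + s2_noise * np.eye(2)
    cov_diff = (s2_obj + s2_noise) * np.eye(2)
    return p_same * gauss2(x, cov_same) + (1 - p_same) * gauss2(x, cov_diff)
```

When p_same is large (nearby pixels), probability mass concentrates along the diagonal I1 = I2, as in the correlated components of Figure 2.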
Figure 2
 
Joint probabilities of pixel intensities, based on an ensemble of elliptical objects and Gaussian-distributed object intensities with an additive Gaussian white noise texture (Methods section). Contour plots are shown for (A) two pixels and (B) three pixels arranged in an equilateral triangle. These joint distributions are weighted averages of independent and correlated distributions. The weighting factors are the various object membership probabilities P(מ), which are plotted below the joint intensity distributions as a function of the distance between pixels.
The object membership probability P(מ) represents how frequently the N selected pixels are grouped into different objects according to מ. We calculate each probability recursively, generalizing an argument of Ruderman (1997). To do so, we must introduce some additional notation. We designate מ\n as the object membership function that remains after removing the nth object. We also define a Boolean vector σ(מ,n) with N components σi(מ,n) = (xi ∈ מn) that each indicate whether the pixel xi is contained in the nth object of מ. For instance, σ({126∣3∣45},3) = (0,0,0,1,1,0).
By construction, there is a sequence of objects in any dead leaves image, ordered by depth. Consider only the topmost object. There is some probability, which we will denote by Qσ(מ,n), that this top object includes all of the pixels in the set מn, while excluding all the other pixels in מ\n. Such an arrangement partially satisfies the membership constraint imposed by מ. However, for this object configuration to contribute to P(מ), we still need to ensure that the excluded pixels are also grouped appropriately by objects “deeper” in the image. The probability that deeper objects satisfy these reduced membership constraints is P(מ\n). Note that this probability is unaffected by whether the deeper objects would have enclosed the pixels in מn: Objects at those positions are already occluded by the top object. There is also a probability Qσ(מ,0) that the top object contains none of the N selected pixels. Given this event, the probability of finding objects deeper in the stack that satisfy the membership constraints is just the original factor P(מ). Summing together all possibilities for the top object, we find P(מ) = Qσ(מ,0)P(מ) + ∑n=1^∣מ∣ Qσ(מ,n)P(מ\n). Solving for P(מ) gives the recursion relation P(מ) = (1 − Qσ(מ,0))^−1 ∑n=1^∣מ∣ Qσ(מ,n)P(מ\n). (3) Crucially, the image that remains below the top object is yet another dead leaves image, with all the same statistical properties as before, so we can calculate P(מ\n) by the same formula, recursively. Eventually, the recursion terminates when there are no pixels left in מ, with P(Ø) = 1.
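The recursion can be sketched as follows. The function q(covered, considered), standing in for the shape-dependent factors Qσ, must be supplied by the caller; the toy probability table in the demo is invented for illustration and is not derived from any particular shape ensemble.

```python
def membership_prob(mem, q):
    """P(mem) by the recursion over the topmost object: sum
    q(part)·P(mem with that object removed) over objects, divided by
    1 − q(covering nothing); P(empty) = 1 terminates the recursion.
    `mem` is a frozenset of frozensets of pixel labels; q(covered,
    considered) is the probability that the topmost object covers
    exactly `covered` among the pixels in `considered`."""
    if not mem:
        return 1.0
    pixels = frozenset().union(*mem)       # pixels still constrained
    total = sum(q(part, pixels) * membership_prob(mem - {part}, q)
                for part in mem)
    return total / (1.0 - q(frozenset(), pixels))

def q(covered, considered):
    """Toy Q table for two pixels (values invented for illustration)."""
    table = {frozenset(): 0.5, frozenset({1}): 0.1,
             frozenset({2}): 0.1, frozenset({1, 2}): 0.3}
    if len(considered) == 2:
        return table[covered]
    # one pixel already occluded: marginalize over whether the top
    # object would have covered it
    other = ({1, 2} - set(considered)).pop()
    return table[covered] + table[covered | {other}]

p_joined = membership_prob(frozenset({frozenset({1, 2})}), q)
p_split = membership_prob(frozenset({frozenset({1}), frozenset({2})}), q)
```

With a consistent Q table, the two possible memberships of a pixel pair have probabilities summing to one, as they must.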
This recursive equation applies universally to any dead leaves model with independent, occluding objects, regardless of shape. In contrast, the factors Qσ(מ,n) depend on the particular shape ensemble and the chosen set of pixels. In the Methods section, we derive the general form of these factors for arbitrary shapes with smooth boundaries. The Supplementary material provides mathematical details of the calculation for the scale-invariant ensemble of elliptical shapes used from here onward, as well as the software implementing the calculation.
The number of possible object membership functions, known as Bell's number BN, quickly grows large as we consider more pixels: unfortunately, it grows slightly faster than exponentially. In practice, this restricts exact calculation to around a dozen pixels. Despite this limitation, interesting insights can be gained both by using few pixels or few subsets of possible object memberships and by analyzing the general behavior in various limits. For instance, in low-clutter conditions when the maximal distance between pixels u is much smaller than the minimum object size r (e.g., Figure 1A), object membership probabilities P(מ) behave as P(מ) ∼ (u/r)^(∣מ∣−1) (Figure 2). Consequently, edges are rare (∣מedge∣ = 2), T-junctions are rarer (∣מT-junc∣ = 3), and every other feature is rarer still (∣מ∣ > 3). In the sections below, we use the solution of the dead leaves model and relevant approximations to explain complex statistical properties of natural scenes.
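The growth that limits exact calculation is easy to check; Bell's number can be computed with the Bell triangle, and for twelve pixels there are already over four million set partitions.

```python
def bell_numbers(n_max):
    """Bell numbers B_0 .. B_n_max via the Bell triangle: each row starts
    with the last entry of the previous row, and each subsequent entry
    adds the entry above; B_n is the first entry of row n."""
    row, bells = [1], [1]
    for _ in range(n_max):
        nxt = [row[-1]]
        for v in row:
            nxt.append(nxt[-1] + v)
        row = nxt
        bells.append(row[0])
    return bells

B = bell_numbers(12)
# B[12] == 4213597 possible object membership functions for 12 pixels
```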
Feature distributions
In this section, we calculate the joint feature probabilities in specific cases where features are linear functions of the pixel intensities, f = F I for some filter matrix F. We set the filters of F to be wavelets, local derivative operators (edge detectors) with a given orientation and scale. Choosing Haar wavelets, which weight intensities by ±1, emphasizes non-Gaussianity of natural feature distributions and thereby establishes a more stringent test for the image model (Lee et al., 2001). 
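A minimal sketch of such a feature matrix for a single scale and two orientations; the 8 × 8 patch size and unit-weight lobes are illustrative choices.

```python
import numpy as np

def haar_pair(n=8):
    """Two orthogonal Haar wavelet filters (±1 lobes) over an n×n patch,
    stacked as rows of a feature matrix F, so that f = F @ I for a
    flattened patch I."""
    h = np.ones((n, n))
    h[:, : n // 2] = -1.0        # vertical lobe boundary
    v = h.T.copy()               # horizontal lobe boundary
    return np.stack([h.ravel(), v.ravel()])

F = haar_pair(8)
# a step edge aligned with one filter's lobe boundary maximizes that
# coefficient while nulling the orthogonal one
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0
f = F @ patch.ravel()
```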
It has been previously reported that empirical histograms of different Haar wavelets and wavelet pairs in the dead leaves model qualitatively reproduce the marginal and joint distributions in log-transformed natural scenes (Lee et al., 2001). Where empirical sampling can, at best, expose these interesting statistical similarities, our analytical results let us understand their origins. 
We choose an especially simple texture model, namely a uniform, Gaussian-distributed intensity for each object. We also superpose a small Gaussian white noise to broaden the delta-function correlations present in truly uniform objects. By giving all objects simple Gaussian textures, we ensure that all non-Gaussian statistics in the model must be the result of occlusion. 
Marginal distributions of wavelet coefficients
One well-described feature of log-transformed natural images is that the distribution of spatial derivatives Δ has heavy tails (Figure 3A), well approximated by a generalized Laplace distribution P(Δ) ∝ e^(−∣Δ∣^β) for an exponent β near 1 (Lee et al., 2001; Mallat, 1989; Ruderman & Bialek, 1994; Wainwright & Simoncelli, 2000). The heavy tails in these distributions cannot be obtained from a standard correlated Gaussian model, because any projection of a multidimensional Gaussian is again Gaussian. Higher order statistical structure is required.
Figure 3
 
Log-probability feature distributions log P(f) of spatial derivatives f. (A) Empirically sampled distribution of derivatives (depicted graphically, inset) in natural images. (B, C) Feature probabilities calculated exactly for dead leaves images, using just one and two samples per image patch, respectively (inset). (D) Empirically sampled distributions for dead leaves images using a 16 × 16 grid of samples per patch (inset). (E, F) Mixture components P(f∣מ) corresponding to (B) and (C), respectively.
This distribution can be calculated exactly for the dead leaves model by representing the local derivative by a simple feature: the intensity difference between nearby points, f = I1 − I2. The resultant feature distribution is a mixture of two components, a narrow central peak and a broader tail (Figures 3B and 3E). While this is a more kurtotic distribution than the Gaussian texture, it does not closely match natural derivative histograms (Figure 3A).
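The excess kurtosis of such a two-component mixture has a simple closed form; the sketch below uses arbitrary illustrative widths for the narrow (same-object) and broad (different-object) components.

```python
def mixture_excess_kurtosis(p, s_narrow, s_broad):
    """Excess kurtosis of the zero-mean Gaussian scale mixture
    p·N(0, s_narrow²) + (1−p)·N(0, s_broad²). It is positive whenever
    the two widths differ, i.e., the mixture has heavier tails than any
    single Gaussian, even though both components are Gaussian."""
    m2 = p * s_narrow**2 + (1 - p) * s_broad**2          # second moment
    m4 = 3 * (p * s_narrow**4 + (1 - p) * s_broad**4)    # fourth moment
    return m4 / m2**2 - 3

k = mixture_excess_kurtosis(p=0.7, s_narrow=0.1, s_broad=1.0)
```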
A simple consideration can account for the discrepancy. In our solution of the dead leaves model, what we have described so far as pixels are actually samples at infinitesimal points. In contrast, pixels in natural images represent light accumulated over some finite sensor area set by film grain, camera sensor wells, or photoreceptor cross-sections. This means that measured pixel values do not directly reflect an intensity sampled from an object but instead reflect integrals over unresolved subpixel details. When many samples are summed over some region Xi, one might naively expect the total Īi = ∑j:xj∈Xi Ij to be Gaussian distributed. However, the usual central limit theorem does not apply, because of the correlations between the variables that are induced by spatially extended objects. These correlations can be segregated by re-expressing the total intensity in an image patch as a sum of the mean intensities in visible objects, weighted by their visible areas, Īi = ∑n=1^∣מ∣ ∣מn∣Jn, where ∣מn∣ is the number of pixels sampling from the nth object, i.e., its visible area, and Jn is the average intensity in that area. The summands ∣מn∣Jn are approximately independent because they correspond to different objects. (They are not strictly independent because the visible areas ∣מn∣ are constrained to add up to the total area of the patch.) This way of writing Īi reveals two reasons why the sum does not converge to a normal distribution: the number of summands is a random variable, and the summands themselves have long-tailed distributions. 

Scale invariance demands that the areas of homogeneous regions be distributed as a power law with exponent 2 (Alvarez, Gousseau, & Morel, 1999). In the dead leaves model, the homogeneous regions are the visible parts of each object. If the mean intensity within an object, Jn, is distributed more narrowly than this, then the distribution of visible areas ∣מn∣ will dominate the tail behavior of the products ∣מn∣Jn. Īi is thus a sum over a random number (one per visible object) of power-law-distributed terms. A generalized central limit theorem holds that the distribution of such random sums is a two-sided exponential distribution when summands are power-law distributed with exponents of 2 or higher (Kotz, Kozubowski, & Podgórski, 2001).1 Indeed, Figure 3D shows nearly straight tails on the log-probability plot. In natural images, areas of homogeneous regions are again distributed as power laws, but depending on the particular image, the exponents can be slightly below 2 (Alvarez et al., 1999). Another generalized central limit theorem shows that under such conditions the distribution of the sum has slightly heavier tails (Gnedenko, 1972), as observed (Figure 3A). While these considerations apply directly to the average pixel intensities, they pertain equally to intensity differences. 
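The convergence of random sums to a two-sided exponential can be checked numerically. The sketch below sums a geometric number of finite-variance symmetric terms, the textbook setting of the Kotz et al. result; the geometric count and uniform summands are illustrative stand-ins, not the dead leaves ensemble itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def geometric_sum_samples(n=50_000, p=0.02):
    """Random sums S = X_1 + ... + X_N with a geometric number N of
    finite-variance symmetric terms (uniform on [-1, 1]), rescaled by
    sqrt(p). Such rescaled geometric sums converge to a two-sided
    exponential (Laplace) distribution as p -> 0."""
    counts = rng.geometric(p, size=n)     # one term per "visible object"
    sums = np.array([rng.uniform(-1.0, 1.0, size=k).sum() for k in counts])
    return np.sqrt(p) * sums

s = geometric_sum_samples()
# the Laplace distribution has excess kurtosis 3; a Gaussian has 0
excess = np.mean(s**4) / np.mean(s**2)**2 - 3
```

The sampled excess kurtosis lands near the Laplace value of 3, i.e., the log-probability plot has nearly straight tails rather than the parabola of a Gaussian.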
We can visualize how the heavy tails emerge by plotting feature distributions conditioned on various object configurations. When many different objects are visible, the independent object intensities tend to average out, giving a narrow distribution; when few objects are visible, their areas are large, so the few object intensities are heavily weighted and their distribution is proportionately broad (Figure 3F). Components with this spectrum of distribution widths all combine to give the mixture distribution heavy tails (Figures 3B–3D). 
Joint distributions of wavelet coefficients
Within natural images, the feature distribution for two orthogonal, colocalized wavelets has diamond-shaped contours (Figure 4A). Densely sampled dead leaves images reveal the same diamond contours (Figure 4B). For both natural images and dead leaves images, the distinctive non-Gaussian structure is most visible for contours at large feature amplitude. In this limit, the mixture components with greatest likelihood dominate the distribution, and the most likely component at high amplitudes is the one with the greatest variance in the given feature direction. The greatest variance for a single Haar wavelet occurs when a boundary between two objects aligns with the boundary between oppositely signed lobes, because that minimizes cancellation and maximizes the overlap each object makes with each lobe. However, this arrangement gives a minimal variance for the orthogonal wavelet at the same location. As the object boundary rotates (Figure 4C), the overlap with one wavelet reduces by exactly the amount that the overlap increases for the orthogonal wavelet. Since a mixture component's width is proportional to the overlap, this perfect trade-off gives the maximum likelihood contours of the observed diamond shape (Methods section, Figure 4D). 2  
Figure 4
 
Joint feature probabilities log P(f1, f2) for wavelet pairs f1 and f2. For orthogonal, colocalized Haar wavelets (inset of (A), shifted for visibility), the contours of the empirically sampled bivariate distribution are diamond-shaped for both (A) natural images and (B) dead leaves images. At high feature amplitudes, certain object configurations have the greatest likelihood and thus dominate the joint distribution. Panel (C) illustrates one such configuration. Colors indicate different objects with unspecified intensities. Dark and light shadings show how the two wavelets weight the image pixels. (D) Specifying only the object geometry (but not the object intensities) gives conditional feature distributions P(f1, f2∣מ) that are bivariate Gaussians with elliptical contours. For the conditional distributions that dominate at high feature amplitude, the contours trace a diamond-shaped envelope (thick curve) as a function of the relative angles between the object boundary and wavelet orientations (Averaging over image patches section). Parallel, neighboring wavelets (inset of (E)) are anticorrelated, with joint probability contours exhibiting a similar pillow shape in (E) natural images and (F) dead leaves images. Panels (G) and (H) illustrate object configurations that dominate at large feature amplitudes, colored as in (C). (G) If one object covers the opposite-sign lobes of neighboring wavelets while others prevent cancellation by the negative lobe, then the conditional feature distribution will have a negative correlation. (H) Similarly, if one object covers the same-signed lobes of both features while another object prevents cancellation, then the conditional distribution will have a positive correlation. Configurations like this are much less probable than those like (G), because the middle object must have precisely the right size and position. 
(I) An ensemble of configurations like (G) produce negatively correlated components (gray ellipses) that vary depending on how precisely the objects cover the feature lobes. The positively correlated components (dashed ellipse) caused by configurations like (H) are many times less probable. Discounting the latter gives the mixture distribution an overall negative correlation, leaving components that trace out the pillow-shaped envelope (thick curve) seen in feature distribution contours (Methods section).
Figure 4
 
Joint feature probabilities log P(f1, f2) for wavelet pairs f1 and f2. For orthogonal, colocalized Haar wavelets (inset of (A), shifted for visibility), the contours of the empirically sampled bivariate distribution are diamond-shaped for both (A) natural images and (B) dead leaves images. At high feature amplitudes, certain object configurations have the greatest likelihood and thus dominate the joint distribution. Panel (C) illustrates one such configuration. Colors indicate different objects with unspecified intensities. Dark and light shadings show how the two wavelets weight the image pixels. (D) Specifying only the object geometry (but not the object intensities) gives conditional feature distributions P(f1, f2Image not available ) that are bivariate Gaussians with elliptical contours. For the conditional distributions that dominate at high feature amplitude, the contours trace a diamond-shaped envelope (thick curve) as a function of the relative angles between the object boundary and wavelet orientations (Averaging over image patches section). Parallel, neighboring wavelets (inset of (E)) are anticorrelated, with joint probability contours exhibiting a similar pillow shape in (E) natural images and (F) dead leaves images. Panels (G) and (H) illustrate object configurations that dominate at large feature amplitudes, colored as in (C). (G) If one object covers the opposite-sign lobes of neighboring wavelets while others prevent cancellation by the negative lobe, then the conditional feature distribution will have a negative correlation. (H) Similarly, if one object covers the same-signed lobes of both features while another object prevents cancellation, then the conditional distribution will have a positive correlation. Configurations like this are much less probable than those like (G), because the middle object must have precisely the right size and position. 
(I) An ensemble of configurations like (G) produces negatively correlated components (gray ellipses) that vary depending on how precisely the objects cover the feature lobes. The positively correlated components (dashed ellipse) caused by configurations like (H) are many times less probable. Discounting the latter gives the mixture distribution an overall negative correlation, leaving components that trace out the pillow-shaped envelope (thick curve) seen in feature distribution contours (Methods section).
For neighboring Haar wavelets with the same orientation, there is once again a remarkable similarity between the pillow-shaped distributions measured for natural images and dead leaves images (Figures 4E and 4F). These too can be explained using simple arguments about the object geometry that dominates at high feature amplitudes. Negatively correlated mixture components occur when an object overlaps neighboring lobes on neighboring filters (Figure 4G). Positively correlated components occur when an object covers the same-signed lobes of both Haar wavelets without cancellation by the intervening lobe of opposite sign. This can only happen if a small object occludes that oppositely signed lobe (Figure 4H). Since a very limited variety of sizes and positions can accommodate this configuration, the positively correlated mixture components have much lower weights. Figure 4I shows mixture components with negative correlations, from which emerges the basic pillow shape of the full bivariate feature distributions (Methods section). 
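These geometric arguments can be spot-checked numerically. Below is a minimal sketch, not the paper's method: a one-dimensional dead-leaves analogue with opaque segments and an inverse-cubic size law (the parameters `s_min`, `s_max`, the image size, and the sample counts are arbitrary choices of the sketch). Neighboring two-point Haar features should come out weakly anticorrelated, with strongly heavy-tailed marginals.

```python
import numpy as np

rng = np.random.default_rng(0)

def dead_leaves_1d(n_pix, s_min=2.0, s_max=50.0, rng=rng):
    """Fill a 1-D pixel row by occlusion: the first (topmost) object
    to cover a pixel fixes its intensity; later objects are hidden."""
    img = np.full(n_pix, np.nan)
    while np.isnan(img).any():
        u = rng.random()  # inverse-transform sample of p(s) ~ s**-3
        s = s_min / np.sqrt(1.0 - u * (1.0 - (s_min / s_max) ** 2))
        c = rng.uniform(-s, n_pix)
        a, b = int(max(c, 0.0)), int(min(c + s, n_pix))
        if b > a:
            seg = img[a:b]
            seg[np.isnan(seg)] = rng.normal()  # one flat intensity per object
    return img

# Neighboring, parallel two-point Haar features f1 = I(i) - I(i+1) and
# f2 = I(i+2) - I(i+3), pooled over many independently sampled images.
f1, f2 = [], []
for _ in range(500):
    img = dead_leaves_1d(64)
    f1.append(img[0:60] - img[1:61])
    f2.append(img[2:62] - img[3:63])
f1, f2 = np.concatenate(f1), np.concatenate(f2)
r = np.corrcoef(f1, f2)[0, 1]
kurt = (f1**4).mean() / (f1**2).mean() ** 2  # heavy tails from occlusion
print(f"corr(f1, f2) = {r:.3f}, kurtosis(f1) = {kurt:.1f}")
```

The exact correlation value depends on the arbitrary size range; the negative sign and the strongly super-Gaussian kurtosis are the robust predictions.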
Conditional distributions of wavelet coefficients
Wavelet coefficients in natural scenes may be nearly decorrelated to second order yet still have a strong statistical dependency taking the form of a “bowtie”-shaped distribution of one filter coefficient conditioned upon another (Figure 5A; Buccigrossi & Simoncelli, 1999). The dead leaves model reproduces this behavior (Figure 5B) and allows us to interpret it as well. 
Figure 5
 
“Bowtie” shapes appear in empirically sampled conditional feature distributions P(f2 ∣ f1) for both (A) natural images and (B) dead leaves images. Horizontal and vertical axes represent the coefficients of two neighboring, orthogonal Haar wavelet filters, f1 and f2 (inset of (A)). The grayscale is normalized so black represents 0 and white is the maximum probability for a given f1. (C, D) Two equally probable object configurations, colored as in Figure 4C, have identical f1 but opposite f2. Both features are proportional to the intensity difference between foreground and background objects. (E) Conditional feature distribution with only four samples per feature (inset). Traces of the limited sampling appear as the faint diagonal bands passing through the origin (highlighted with dotted lines on right half). Each distinct band corresponds to a conditional distribution given a different object membership function, P(f2 ∣ f1, מ). Symmetry ensures that there will be no linear correlation between the two features, even as the width of P(f2 ∣ f1) increases with ∣f1∣. With features sampled more densely, more such diagonal bands appear, until the bands blend together (B). This produces the distinctive bowtie shape in the conditional feature distributions.
The distribution of intensities found within an object is narrower than the intensity distribution averaged over all objects. Consequently, when a wavelet filter lies across an object boundary, it typically yields a larger magnitude than the same filter applied wholly within a single object. Since object boundaries tend to extend across space, a second filter with a different scale or orientation has an elevated probability of encountering the same edge. However, as Figures 5C and 5D illustrate, the relative sign and magnitude of the two feature amplitudes depend on how the object boundary overlaps the second filter. In this symmetric example, positive and negative feature amplitudes are equally probable, so the conditional distribution P(f2 ∣ f1) broadens with ∣f1∣ without any change in the mean (Figure 5E). This explains why the variability in one feature amplitude increases with the amplitude of a nearby feature. 
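A toy generative sketch makes this symmetry argument concrete (all parameters here are illustrative, not taken from the paper): each sample draws a Gaussian intensity difference across a boundary, feeds it fully into feature 1, and feeds it into feature 2 with a variable overlap and an equally probable sign, mimicking the two configurations of Figures 5C and 5D.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Gaussian foreground/background intensity difference at a boundary.
delta = rng.normal(size=n)
# Geometry couples the boundary fully into feature 1 ...
f1 = delta
# ... and into feature 2 with variable overlap and an equally probable
# sign (the two configurations of Figures 5C and 5D), plus faint noise.
overlap = rng.uniform(0.2, 1.0, size=n)
sign = rng.choice([-1.0, 1.0], size=n)
f2 = sign * overlap * delta + 0.05 * rng.normal(size=n)

small = np.abs(f1) < 0.5   # condition on small vs large |f1|
large = np.abs(f1) > 1.5
print("std f2 | small f1:", round(float(f2[small].std()), 3))
print("std f2 | large f1:", round(float(f2[large].std()), 3))
print("corr(f1, f2):", round(float(np.corrcoef(f1, f2)[0, 1]), 4))
```

The conditional spread of f2 grows with ∣f1∣ while the conditional mean and the linear correlation stay near zero, which is exactly the bowtie signature.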
Shared causes of edges
A major advantage of using the dead leaves model is that the causes of image features—objects and their attributes—are represented explicitly. Our results relate these causes to each other as well as to the observable, pixel-based image features. 
In natural images, edge pairs tend to fall tangent to circles passing through both edge locations (Chow, Jin, & Treves, 2002; Sigman, Cecchi, Gilbert, & Magnasco, 2001). Geisler and Perry (2009) and Geisler et al. (2001) augmented such an analysis with global information about physical contours, by laboriously hand-segmenting objects within many images of foliage. The likelihood that two edges share a physical cause (Figure 6A)—i.e., belong to the same contour—was highly predictive of human judgments of whether the edges had a shared cause. 
Figure 6
 
Joint statistics of local edges and global contours. (A) The likelihood ratio that edge pairs in natural images are caused by a common object versus by different objects (replotted from Geisler et al., 2001 with permission). For test edges at many distances, directions, and orientations relative to a reference edge (horizontal bar at origin), line segments are colored to indicate the likelihood ratio (Methods section). The segments are sorted so those indicating high likelihoods appear in front. Concentric white rings correspond to unsampled distances. In the dead leaves model, we can define the corresponding likelihood in one of two ways. First, a pair of edges could have a “shared cause” if at least one side of each edge samples from the same object. The resultant likelihood is shown in (B) and an example of a shared cause is shown in (C). Second, we may add a depth constraint to better describe the existence of a shared contour: this shared object must also be on top of the other objects. Using this second definition, (D) shows the likelihoods and (E) gives a sample configuration. These likelihood ratios are on average 60 times lower than in (B) and reveal a pattern very similar to the observations made in natural images (A).
The dead leaves model can provide a mathematical “ground truth” for such calculations. First, we select a shape ensemble with a relatively large minimum object size (Methods section) to match the low-clutter conditions produced by the hand segmentation. Second, we represent individual edge features by an object membership function that divides four pixels into two pairs (Figure 11A). Note that this definition of edges uses the object membership directly, instead of inferring object boundaries from image pixels. Third, we define the conditions under which a pair of edges has the same physical cause. Fourth, for edge pairs with various geometrical relationships (Figure 11B), we plot the likelihood ratio under the hypotheses of a shared cause versus different causes (Methods section). 
A seemingly natural condition would identify a shared cause when there exists an object that participates in both edges. The resultant likelihood ratio always favors a shared cause, for all relative positions and orientations of the edge pair (Figure 6B), completely at odds with reported statistics (Figure 6A; Geisler & Perry, 2009; Geisler et al., 2001). The reason can be seen in Figure 6C: An object could be shared across two edges simply if it is a common background for two distinct objects. Thus, this definition, only involving object identity on both sides of an edge, is inadequate to reproduce the observed edge statistics. 
A more sensible pattern emerges by modifying the definition of common cause to include relative depth, assigning “border ownership” (Zhou, Friedman, & von der Heydt, 2000) to the local edge. We now define a common cause to exist when a single object participates in both edges and is closer to the viewer than the other objects seen at these edges. An example of this configuration is seen in Figure 6E, which agrees with our intuition of how an object contour can be a shared cause for two edges. Application of this definition requires more than the object membership function, since מ reflects only the grouping of pixels into objects and does not represent any information about relative depth. We can, however, augment the object membership function with this extra information, defining an ordered object membership function ם. (This Hebrew letter, final mem, appears only at the end of words and is used here to indicate that object order matters.) Whereas מ was a set of subsets, ם is an ordered set of subsets, with םn representing the pixels contained by the nth highest object sampled by any of the N selected pixels. For instance, Figure 1C depicts the ordered set ם = {{x3},{x4,x5},{x1,x2,x6}}, which we abbreviate as {3 > 45 > 126}. The probabilities of these ordered object membership functions can be calculated by a recursion equation very similar to that used for the unordered variant (Methods section). Figure 6D plots likelihood ratios under this depth-inclusive definition of shared cause (Table 1), in which a common cause occurs whenever a single object is on top at both edges. The results show that certain edges are more likely to have a common cause, whereas other edges are more likely to be independent. The pattern closely resembles the results of Geisler and Perry (2009) and Geisler et al. (2001; Figure 6A). 
Since those statistics were predictive of human judgments about contour completion across occluders, the dead leaves model also qualitatively predicts human inference about such ambiguous stimuli. 
Table 1
 
Object membership functions used for joint edge statistics. For compactness, we represent object membership functions by the pixel indices divided symbolically into ordered or unordered groups. For example, {{x1,x2},{x3,x4}} is written as 12 ∣ 34 if unordered, and as 12 > 34 if ordered such that the object containing points x1 and x2 lies above the object containing x3 and x4. These object membership functions are classified according to whether they reflect a shared cause or different causes for the two edges, using (A) unordered or (B) ordered representations.
(A) Classification of unordered מ

S: Shared cause     D: Different causes
1256 ∣ 34 ∣ 78      12 ∣ 34 ∣ 56 ∣ 78
1278 ∣ 34 ∣ 56
12 ∣ 56 ∣ 3478
12 ∣ 78 ∣ 3456
1256 ∣ 3478
1278 ∣ 3456

(B) Classification of ordered ם

S: Shared cause     D: Different causes
1256 > 34 ∣ 78      34 ∣ 78 > 1256    34 > 1256 > 78    78 > 1256 > 34
3478 > 12 ∣ 56      12 ∣ 56 > 3478    12 > 3478 > 56    56 > 3478 > 12
1278 > 34 ∣ 56      34 ∣ 56 > 1278    34 > 1278 > 56    56 > 1278 > 34
3456 > 12 ∣ 78      12 ∣ 78 > 3456    12 > 3456 > 78    78 > 3456 > 12
1256 ∣ 3478         12 ∣ 34 ∣ 56 ∣ 78
1278 ∣ 3456
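The depth rule behind the ordered classification in Table 1B is easy to state in code. The sketch below uses our own encoding (a list of pixel sets, topmost first) rather than the paper's recursion, and classifies a shared cause exactly as defined in the text: the topmost object seen at these pixels must supply one side of each edge.

```python
from itertools import permutations

# Pixels 1-4 form edge 1 (sides {1,2} vs {3,4}); pixels 5-8 form
# edge 2 (sides {5,6} vs {7,8}).
EDGE1 = [frozenset({1, 2}), frozenset({3, 4})]
EDGE2 = [frozenset({5, 6}), frozenset({7, 8})]

def spans_both(obj):
    """Does this object supply one side of each edge?"""
    return any(s <= obj for s in EDGE1) and any(s <= obj for s in EDGE2)

def shared_cause(ordered):
    """Two edges have a shared cause iff the topmost object seen at
    these pixels supplies a side of each edge (depth-aware rule)."""
    return spans_both(ordered[0])

# "1256 > 34 | 78": the spanning object lies on top, so shared.
assert shared_cause([frozenset({1, 2, 5, 6}), frozenset({3, 4}), frozenset({7, 8})])
# "34 > 1256 > 78": the spanning object is occluded at edge 1, so different.
assert not shared_cause([frozenset({3, 4}), frozenset({1, 2, 5, 6}), frozenset({7, 8})])
# "1256 | 3478": either depth order puts a spanning object on top, so shared.
two = [frozenset({1, 2, 5, 6}), frozenset({3, 4, 7, 8})]
assert all(shared_cause(list(p)) for p in permutations(two))
# Four single-sided objects never share a cause, in any depth order.
four = [frozenset(s) for s in ({1, 2}, {3, 4}, {5, 6}, {7, 8})]
assert not any(shared_cause(list(p)) for p in permutations(four))
print("sampled Table 1B classifications reproduced")
```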
Relaxing the model assumptions
The preceding sections exploited a number of assumptions about the ensemble of sizes, shapes, and textures. Here we address how relaxing those assumptions influences the results. 
Size ensemble
The feature distributions calculated above all assumed an inverse-cubic size ensemble, which was chosen to produce the scale invariance that is a well-established feature of natural scenes (Gousseau & Roueff, 2007; Lee et al., 2001; Ruderman, 1997). As we show below, occlusion alone is not enough to reproduce the natural feature distributions, but occlusion with scale invariance is sufficient. 
The heavy-tailed marginal wavelet distributions (Figure 3) were generated as sums of a random number of independent causes, namely the areas of various objects visible within each pixel. One might expect that any size ensemble will give sums with a random number of terms and thus should produce similar tails. However, to obtain the nearly exponential feature distribution seen in natural images, one requires that the number of terms be broadly distributed, and this is not true for arbitrary size ensembles. An inverse-square size distribution favors large objects and rare edges. The number of independent contributions to the sum is then reliably small—usually just zero or two—so the marginal distributions have a very high peak with occasional large deviations (Figure 7A). An inverse-quartic size distribution favors many small objects. The number of independent components is then large but fairly consistent, and the sums are much closer to Gaussian due to the usual central limit theorem for a fixed number of summands (Figure 7C). Between these extremes, the inverse-cubic size ensemble gives the broadly distributed number of summands needed for the generalized central limit theorem to apply, and it consequently reproduces the nearly exponential tails seen for natural images (Figure 7B). 
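The effect of the size exponent can be probed with a small simulation. This is a 1-D sketch under assumed parameters (size range, block width, sample counts), so the numbers are only qualitative; note also that the exponent producing exact scale invariance shifts in one dimension. The robust trend is that heavier representation of large objects yields spikier, heavier-tailed feature distributions.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_size(alpha, s_min, s_max, rng):
    """Inverse-transform sample from p(s) ~ s**-alpha on [s_min, s_max]."""
    u = rng.random()
    a = 1.0 - alpha
    return (s_min**a + u * (s_max**a - s_min**a)) ** (1.0 / a)

def dead_leaves_1d(n_pix, alpha, s_min=2.0, s_max=100.0, rng=rng):
    """Occlusion fill: the first (topmost) object to cover a pixel wins."""
    img = np.full(n_pix, np.nan)
    while np.isnan(img).any():
        s = sample_size(alpha, s_min, s_max, rng)
        c = rng.uniform(-s, n_pix)
        a, b = int(max(c, 0.0)), int(min(c + s, n_pix))
        if b > a:
            seg = img[a:b]
            seg[np.isnan(seg)] = rng.normal()
    return img

def haar_kurtosis(alpha, n_img=300, n_pix=96, w=4):
    """Kurtosis of coarse Haar features (adjacent block-mean differences)."""
    feats = []
    for _ in range(n_img):
        blocks = dead_leaves_1d(n_pix, alpha).reshape(-1, w).mean(axis=1)
        feats.append(blocks[:-1] - blocks[1:])
    f = np.concatenate(feats)
    f = f - f.mean()
    return (f**4).mean() / (f**2).mean() ** 2

k2, k3, k4 = (haar_kurtosis(a) for a in (2.0, 3.0, 4.0))
print(f"kurtosis: exponent -2 -> {k2:.1f}, -3 -> {k3:.1f}, -4 -> {k4:.1f}")
```

The steeper the exponent, the more the features are built from many small, consistent contributions, and the closer the kurtosis falls toward the Gaussian value of 3.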
Figure 7
 
Comparison of feature distributions for the dead leaves model using different size distributions. The columns show sample images, marginal distributions of Haar wavelets, joint distributions of orthogonal and parallel wavelets, conditional distributions of wavelets, and contour likelihoods, each displayed as in Figures 3–6. (A–C) Objects are drawn from power-law size ensembles with exponents −2 to −4.
The diamond- and pillow-shaped contours of the joint distributions (Figure 4) at high amplitude were explained by configurations involving only a few object boundaries. These configurations must have non-negligible probability for those arguments to hold, but beyond that minimal requirement the reasoning does not depend on the precise distribution of object sizes. Nonetheless, one might need to look at very high feature amplitudes to see these predicted contour shapes: Dead leaves models with inverse-square and inverse-quartic size distributions produce very different joint distribution contours for those feature amplitudes with any appreciable probability (Figures 7A and 7C). Contrast this with the inverse-cubic size distribution, for which the diamond and pillow contour shapes appear already at lower amplitudes (Figure 7B). We attribute this property to scale invariance, which requires that filters of different spatial extents give identically shaped feature distributions up to a scale factor. Correspondingly, since the large filters can be decomposed into smaller ones, a feature distribution at large amplitudes should also resemble that same distribution at smaller amplitudes. 
The bowtie-shaped conditional distributions (Figure 5) were caused by object configurations that gave proportional responses from both wavelet filters. The proportionality constant depended on the geometry of the objects and was smaller for more curved objects that tend to overlap less with the second filter. Consequently, the bowties are stronger for ensembles with more large objects (Figure 7A) and weaker for ensembles with more small objects (Figure 7C), with the most natural shapes falling in the middle (Figures 5 and 7B). 
In contrast with the wavelet feature distributions, the contour likelihoods seen in Figure 6 have only a very weak dependence on the size ensemble (Figures 7A–7C). This is because we imposed a large minimum size to match the low-clutter conditions used experimentally (Geisler et al., 2001). For monotonically decreasing distributions, the most probable size is this minimum. Consequently, power-law size ensembles all possess similar local edge and curvature statistics (August & Zucker, 2000), which ultimately determine the contour likelihood ratios. If we instead allow a small minimum object size, then the ensembles exhibit much greater differences: The inverse-cubic and -quartic distributions yield edge pairs that are almost always from different contours (data not shown). Only the inverse-square distribution has sufficiently low clutter to reproduce the experimental observations of Figure 6A.
Shape ensemble
Since the size ensemble has such important effects on feature distributions, we should consider also the effects of shape. Most real objects have more elaborate shapes than the ellipses used in these calculations. Notably, the most common edge configuration seen in natural scenes is consistent with circular (Sigman et al., 2001), elliptical, or parabolic (Geisler & Perry, 2009) arcs. This accounts for why the elliptical object ensemble could reproduce statistics of images populated by complex, natural objects. 
Still, incorporating more complex objects may correct some minor discrepancies between the dead leaves model and natural scenes. In Figure 6, for instance, while all edges to the side of a reference edge are unlikely to share a cause, the least unlikely ones are parallel in the model but perpendicular in natural images. This detail depends on the particular shape ensemble. Elliptical objects have no edge pairs arranged in a T, whereas more complex shapes with inflection points might. Simulations with non-convex shapes (dumbbells, rings, random Bezier curves) do show such differences in details of their likelihood ratios (data not shown). However, there is no change in the basic trend, which favors common causes for edges with similar orientations and headings but otherwise favors different causes. 
Texture ensemble
How does the texture distribution affect the feature statistics? A uniform Gaussian texture with superposed low-amplitude white noise was chosen to isolate the effects of occlusion, but this texture is not very natural. On the other hand, natural textures are dominated by the lowest frequency components, and over the filter length scales those components will manifest as an effective mean luminance, which is already reflected in the model. High-frequency components are weaker and, furthermore, largely cancel out when integrated over a scale larger than the wavelength. These components will then generally appear as local noise added to the features. This noise just blurs the feature distributions by an amount proportional to the noise amplitude. As the texture length scale increases, there is a concomitant increase in the non-Gaussianity of the distributions (data not shown). Simulations with the range of mottled, relatively large-scale textures used in Figure 1B leave the non-Gaussian properties unchanged (data not shown). 
Natural textures have correlations across scales, often due to the presence of edges. Under those conditions, the effects of natural textures should be similar to the effects of multiple object boundaries within the filters. Accordingly, statistics of natural, purely textural images (Pickard, Graszyk, Mann, Wachman, & Pickard, 1995) can have joint distributions that resemble those obtained from the dead leaves model. Feature distributions in textures with abundant edges appear less Gaussian, and those for smoother textures appear more Gaussian (data not shown). Typical natural textures thus have little effect on the measured statistics, beyond that already captured by the dead leaves model. 
Discussion
This study presented and then solved the dead leaves model, a simplified model of natural images for which occlusion is the central property. The solution consists of exact probability distributions that relate arbitrary image features to each other and to the depicted objects. By applying and analyzing this solution, we were able to account for several curious observations about natural image statistics. 
The solution of the model was made possible by connecting image features to object configurations through the object membership function מ. This representation enables probability distributions to be decomposed into a mixture of simpler distributions. The existence of a mixture distribution for the dead leaves model was first proved in Bordenave, Gousseau, and Roueff (2006) and Gousseau and Roueff (2007). Here we found an explicit solution for the mixture components that yields concrete numbers for use in various applications. Additionally, this solution generalizes to give probabilistic relationships among all model variables (Generalizations section), including object texture, size, shape, position, and depth. The ability to relate arbitrary image features and many diverse object attributes in a principled manner is a substantial advance over previous efforts. 
We used these results to examine the distributions of wavelet features in the model and natural images. For these applications, we gave every object a Gaussian-distributed uniform intensity with small additive Gaussian white noise. This ensured that all non-Gaussian statistics were purely effects of occlusion. The resultant dead leaves model could reproduce the marginal, joint, and conditional distributions of wavelet coefficients observed in natural images. The nearly exponential tails of the marginal distributions were a consequence of integrating over unresolved details (Figure 3). The diamond-shaped joint distribution of orthogonal, colocalized wavelets occurs because an edge aligned well with one wavelet must be aligned poorly with an orthogonal wavelet (Figure 4). The pillow-shaped joint distribution of parallel wavelets reflects the rarity with which objects can induce positive correlation by squeezing precisely into one wavelet lobe (Figure 4). Bowtie-shaped conditional distributions arise because extended object boundaries can overlap wavelets with proportional amplitudes and equal or opposite signs (Figure 5). Each of these distributions was most natural when scale invariance was imposed by an inverse-cubic size ensemble (Figure 7). Finally, we found that accurately computing the likelihood that two edges are part of the same contour depends critically on ascribing relative depth to the edges (Figure 6). Table 2 summarizes the assumptions needed for these calculations. Overall, the unifying idea is that simple geometric configurations of occluding objects are sufficient to explain seemingly complex statistics of edge features. 
Table 2
 
Summary of assumptions used in calculations and their purposes.
Assumption                                          Purpose
General
   Occlusion                                        Improve realism of model images
   Object independence                              Solve general dead leaves model
Shape
   Elliptical shapes                                Find exact membership probabilities P(מ)
   Inverse-cubic size distribution                  Approximate scale invariance
   Minimum size bigger/smaller than image feature   Simulate low-/high-clutter images
Texture
   Gaussian white noise texture                     Simplify conditional intensities P(I ∣ מ)
   Low-contrast texture                             Emphasize consequences of occlusion
Beyond the dead leaves model
Although the dead leaves model is sufficient to cause the observations presented above, it is not the sole process that could do so. As one striking example, the cratered lunar surface appears remarkably similar to dead leaves images (Stuart-Alexander, 1978). Even though the causal process is entirely different from occlusion, the essential properties are identical: New impacts locally erase traces of previous impacts, and small craters are much more common than large ones. Similar principles may approximate other physical processes as well, such as those that determine surface composition or some three-dimensional bump textures. The results presented here should pertain to feature statistics caused by any such “exclusion” process. 
The results of the Relaxing the model assumptions section suggest more general causes for the observed statistical properties: sparse edges and scale invariance. In the dead leaves model, occlusion by extended contours automatically guarantees sparse edges, and the inverse-cubic size distribution ensures scale invariance. Other models may also have these properties, even without occlusion. For instance, in a variant of the dead leaves process one may combine an infinite stack of objects with some transparency. With transparency of 50%, this variant keeps some of the original non-Gaussianity (Figure 8B), since there remains enough opacity to ensure that edges are sharp and only a few objects determine the image value at each point. As the opacity decreases, more objects sum together and eventually wash out all distinct edges (Figure 8C). In the limit of very low opacity, the model becomes a linear sum of many objects and converges to a Gaussian process. One can avoid this unnatural Gaussian limit even in a linear model by imposing sparseness by fiat, allowing only a small number of objects rather than a large or infinite stack (Grenander & Srivastava, 2001). With too few or too many objects, one sees distributions like those in Figure 7A or 8C, respectively, but with just the right number of objects, one can approximate the natural statistics (data not shown). 
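The transparency variant is also easy to sketch in one dimension. Below, an effectively infinite stack is alpha-composited from the topmost layer down, stopping once the remaining transmittance is negligible (the opacity values follow Figure 8; the other parameters are arbitrary choices of the sketch). Feature kurtosis should fall toward the Gaussian value of 3 as opacity decreases.

```python
import numpy as np

rng = np.random.default_rng(3)

def transparent_leaves_1d(n_pix, opacity, s_min=2.0, s_max=100.0,
                          tol=0.01, rng=rng):
    """Alpha-composite an effectively infinite 1-D stack of objects,
    working from the topmost layer down until little light remains."""
    img = np.zeros(n_pix)
    trans = np.ones(n_pix)  # remaining transmittance per pixel
    while trans.max() > tol:
        u = rng.random()  # inverse-cubic size law
        s = s_min / np.sqrt(1.0 - u * (1.0 - (s_min / s_max) ** 2))
        c = rng.uniform(-s, n_pix)
        a, b = int(max(c, 0.0)), int(min(c + s, n_pix))
        if b > a:
            img[a:b] += trans[a:b] * opacity * rng.normal()
            trans[a:b] *= 1.0 - opacity
    return img + 0.05 * rng.normal(size=n_pix)  # faint texture noise

def haar_kurtosis(opacity, n_img=150, n_pix=96):
    f = np.concatenate([np.diff(transparent_leaves_1d(n_pix, opacity))
                        for _ in range(n_img)])
    f = f - f.mean()
    return (f**4).mean() / (f**2).mean() ** 2

k_opaque, k_mid, k_clear = (haar_kurtosis(o) for o in (1.0, 0.5, 0.1))
print(f"kurtosis: opacity 1.0 -> {k_opaque:.1f}, "
      f"0.5 -> {k_mid:.1f}, 0.1 -> {k_clear:.1f}")
```

With full opacity only one object is visible per pixel, so edges stay sharp and features heavy-tailed; at low opacity many layers sum linearly and the features become nearly Gaussian.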
Figure 8
 
Feature distributions for the dead leaves model with only partially opaque objects, shown as in Figure 7. (A–C) Objects have opacities of 1.0, 0.5, and 0.1.
Figure 8
 
Feature distributions for the dead leaves model with only partially opaque objects, shown as in Figure 7. (A–C) Objects have opacities of 1.0, 0.5, and 0.1.
Although a linear model with sparseness constraints is capable of matching edge statistics, it will be unable to account for higher order features like T-junctions (Mumford & Gidas, 2001), which are known to be highly informative elements for visual perception (Hoffman, 1998). Occlusion describes such features naturally, and it will be interesting to apply the general solution of the dead leaves model to study these higher order statistics. 
Despite the dead leaves model's success at reproducing many complex natural statistics, we also expect some statistical differences. Indeed, whereas natural scenes appear reasonably Gaussian after normalizing intensities by the local standard deviation (Ruderman & Bialek, 1994; Wainwright & Simoncelli, 2000), dead leaves images do not have this property. This therefore excludes object boundaries as the cause of that normalized Gaussianity, despite speculations to the contrary (van Hateren, 1997). By extending the model in various ways, one may hope to capture this and other natural image properties and thereby reveal their underlying causes. 
One way to improve the realism of the dead leaves model is by adding correlations between model variables. For instance, light sources could be modeled by modulating texture according to position within each object and adding intensity correlations between objects. Rudimentary three-dimensional shape could be included using textures to indicate object tilt (Saunders & Knill, 2001). Perspective could be modeled by covarying size with depth. The model has been expanded to true three-dimensionality by giving objects a depth coordinate rather than merely a sequential ordering, and some depth statistics are then calculable (Langer, 2008). It may also be possible to compute the probabilistic relationships between model object depths and depth cues like binocular disparity. Images with such improvements could be easily simulated, but in some cases, a modified solution for the enhanced model would be required. 
Toward neural coding of natural scenes
Some perceptual tasks can be accurately modeled as inference based on simple models of stimulus probabilities (Battaglia, Jacobs, & Aslin, 2003; Ernst & Banks, 2002; Howe & Purves, 2005; Kording, Ku, & Wolpert, 2004; Stocker & Simoncelli, 2006). Human perception of images appears biased toward statistically probable features of the dead leaves model. For example, empirical edge statistics predict psychophysical judgments about whether two edges have a common cause (Geisler et al., 2001), and the dead leaves model reproduces these statistics. Artificial neural networks trained on dead leaves images make systematic interpretation errors that are consistent with illusory percepts in humans (Corney & Lotto, 2007). Such evidence hints that these percepts might result from perceptual inference using probabilities described by the dead leaves model. 
On a more mechanistic level, some electrophysiological recordings of individual neurons in animal cortex appear consistent with a probabilistic weighing of sense data (Ma, Beck, Latham, & Pouget, 2006; Mazurek, Roitman, Ditterich, & Shadlen, 2003; Murray, Kersten, Olshausen, Schrater, & Woods, 2002). We might speculate that some cortical neurons could be tuned to encode feature probabilities. For instance, complex cells in primary visual cortex are excited by edges irrespective of the polarity and precise location of those edges (Hubel & Wiesel, 1962) and are especially sensitive to the phase alignment caused frequently by object boundaries in natural images (Felsen, Touryan, Han, & Dan, 2005). We might therefore wish to describe a rudimentary complex cell as encoding the probability that an edge passes through two points in its receptive field, irrespective of which side of the edge is brighter. In our formalism, this corresponds to an object membership function מedge = {1 ∣ 2}. Assuming that objects have Gaussian-distributed intensities and the image sensors have some additive Gaussian noise, the probability of an edge given the intensity difference Δ across space is P(מedge ∣ Δ) = [1 + k exp(−βΔ²)]⁻¹, where k and β are positive constants that depend on the spatial scale, overall image contrast, and sensor noise. This function resembles the contrast energy model of complex cells (Adelson & Bergen, 1985) with a saturating non-linearity. Thus, we might interpret complex cell activity as encoding the probability of a local edge in a world of objects. It will be interesting to explore such a model more thoroughly and to see if other neurons have properties that map nicely onto representations of still more complex features within the dead leaves model. 
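The saturating form quoted above follows from Bayes' rule if Δ is Gaussian with a larger spread when an edge is present than when it is absent. The sketch below uses purely illustrative values for the prior `p_edge` and the two spreads `sig_edge` and `sig_noise` (none of these numbers come from the paper); k and β are then derived rather than free.

```python
import numpy as np

def p_edge_given_delta(delta, p_edge=0.3, sig_edge=1.0, sig_noise=0.5):
    """Posterior probability of a local edge given the intensity
    difference `delta` between two nearby points, assuming
    delta | edge ~ N(0, sig_edge^2) and delta | no edge ~ N(0, sig_noise^2).
    Bayes' rule reduces to P(edge | delta) = 1 / (1 + k*exp(-beta*delta^2))."""
    k = (1.0 - p_edge) / p_edge * (sig_edge / sig_noise)
    beta = 0.5 * (1.0 / sig_noise**2 - 1.0 / sig_edge**2)
    return 1.0 / (1.0 + k * np.exp(-beta * np.asarray(delta, float) ** 2))

deltas = np.linspace(0.0, 2.0, 9)
post = p_edge_given_delta(deltas)
print(np.round(post, 3))
```

The curve starts at the prior-driven value 1/(1 + k), rises monotonically in ∣Δ∣, and saturates at 1, which is the saturating contrast-energy shape described in the text.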
Since synaptic connections are modified by neural correlations, and the occlusion model explains stimulus correlations, the model may also help generate predictions about cortical circuitry that has matured in the natural world. 
In vision science, progress has been made by finding stimuli appropriate for the area of study (Rust & Movshon, 2005). The best stimulus is one that contains a rich repertoire of the right kinds of features while limiting extraneous detail. Since the dead leaves model shares many low- and mid-complexity features with the natural environment while simplifying some higher level features, it is an especially good stimulus for experiments that probe the mechanisms of low- and mid-level vision. It strikes a good balance between tractability, accuracy, and richness by isolating two causes of image features that must be disambiguated to interpret truly natural scenes: occlusion and texture. The availability of an exact solution for the relevant probabilities is a promising new ingredient for experimental and theoretical studies of visual function. 
Methods
Dead leaves membership probabilities
Equation 3 expresses the object membership probabilities P(ω) in terms of some geometric factors Q_σ(ω, n). These factors represent the probability that the points x_i ∈ ω_n are included in one object while the remaining points are not, averaged over all object positions and shapes. For convenience, we name these Q_σ "inclusion probabilities." Note that these quantities involve the geometry of single objects only; the recursion of Equation 3 converts them into the multiobject probabilities P(ω) that characterize the dead leaves model geometry. In this section, we show how the inclusion probabilities can be calculated for arbitrary objects. 
We begin by specifying a shape through a "leaf" function L_σ(x, ρ), which is an indicator function over space x and shape parameter(s) ρ. The function can indicate either the inside or the outside of an object centered on the origin, depending on the binary variable σ ∈ {0,1}: L_σ(x, ρ) equals σ when pixel x is inside the object and 1 − σ when x is outside it (Figure 9A). With this definition,
$$Q_\sigma(c, \rho) = \prod_i L_{\sigma_i}(x_i - c, \rho)$$
indicates whether a leaf with shape ρ and location c includes all sample points x_i ∈ ω_n and excludes all the remaining points (Figure 9B). 
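A leaf function for one concrete shape family can be written down directly. This sketch assumes an elliptical leaf parameterized by ρ = (a, b, φ), i.e., semi-axes and orientation, which is one choice among the arbitrary shapes the formalism allows:

```python
import math

def leaf(sigma, x, rho):
    """Indicator L_sigma(x, rho) for an ellipse centered on the origin.

    rho = (a, b, phi): semi-axes and orientation (an assumed
    parameterization).  Returns 1 when sigma = 1 and x is inside the
    object, or when sigma = 0 and x is outside it; otherwise 0.
    """
    a, b, phi = rho
    px, py = x
    # Rotate the query point into the ellipse's principal frame.
    u = math.cos(phi) * px + math.sin(phi) * py
    v = -math.sin(phi) * px + math.cos(phi) * py
    inside = (u / a) ** 2 + (v / b) ** 2 <= 1.0
    return int(inside) if sigma == 1 else int(not inside)

# By construction, L1 and L0 partition space: they sum to 1 everywhere.
```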
Figure 9
 
Diagrams illustrating inclusion probabilities Q_σ(ω, n). (A) A "leaf" function showing the shape of an object. L₁(x, ρ) = 1 at points x that are inside an object with shape parameter ρ, and L₀(x, ρ) = 1 at points outside it. Here the shape parameter ρ specifies a smooth irregular object. (B) Sample indicator functions Q_σ(ω, n)(c, ρ) identify locations c where an object could be placed to enclose all pixels x_i ∈ ω_n and exclude the rest. Rotated copies of the object shape surround each point x_i, designating the locations c where that object will enclose x_i. An arrow points to the shaded region where an object could be placed to enclose both x₁ and x₃ but not x₂, whose area is Q₁₀₁(ρ). The other shaded region indicates locations where an object would enclose only x₂, whose area is Q₀₁₀(ρ). Note that this diagram represents possible locations c of a single object, not three objects! (C) The indicator functions change shape in a complex manner as a function of the shape parameter ρ, as illustrated for the outlined region shown at four different object sizes. This makes it challenging to integrate their areas over the shape ensemble to obtain the inclusion probabilities Q_σ(ω, n).
The inclusion probabilities Q_σ in Equation 3 are averages over all possible object shapes and positions. Thus, we are interested in the average of Q_σ(c, ρ) over the distribution of leaf positions P(c) and shapes P(ρ):
$$Q_\sigma = \int d\rho\, P(\rho)\, Q_\sigma(\rho) = \int d\rho \int dc\, P(\rho)\, P(c)\, Q_\sigma(c,\rho). \tag{5}$$
We first perform the average over object positions c to obtain Q_σ(ρ), and subsequently calculate the average over object shapes ρ. 
In the dead leaves model, objects are distributed with uniform probability across space. For simplicity, we also assume wraparound boundary conditions and, with no loss of generality, require that no object is larger than the image, to avoid self-intersections. (We can allow larger objects by choosing a small window into the dead leaves world to represent our image; objects may be larger than the window but smaller than the entire model world.) By scaling distance so that the image has unit area, we have P(c) = 1, and the c-integral of the binary-valued Q_σ(c, ρ) gives the inclusion probabilities for a given ρ as the areas of the regions where Q_σ(c, ρ) = 1. 
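For circular leaves, the inclusion probabilities can be checked numerically. The sketch below estimates, by Monte Carlo over uniformly placed disc centers in a unit-area image, the probability that a single disc of radius r covers both of two points a distance d apart, and compares it with the exact lens-shaped intersection area of two discs; the radii and distances are arbitrary example values chosen so no wraparound is needed:

```python
import math, random

def lens_area(r, d):
    """Exact area of locations c where a disc of radius r covers both of
    two points a distance d apart (intersection of two discs of radius r)."""
    if d >= 2 * r:
        return 0.0
    return 2 * r * r * math.acos(d / (2 * r)) - (d / 2) * math.sqrt(4 * r * r - d * d)

def mc_inclusion(r, d, trials=200_000, seed=0):
    """Monte Carlo estimate of the inclusion probability that a uniformly
    placed disc (P(c) = 1 on the unit square) contains both sample points.
    A sketch for circular leaves only."""
    rng = random.Random(seed)
    x1, x2 = (0.5 - d / 2, 0.5), (0.5 + d / 2, 0.5)
    hits = 0
    for _ in range(trials):
        c = (rng.random(), rng.random())
        in1 = (c[0] - x1[0]) ** 2 + (c[1] - x1[1]) ** 2 <= r * r
        in2 = (c[0] - x2[0]) ** 2 + (c[1] - x2[1]) ** 2 <= r * r
        hits += in1 and in2
    return hits / trials
```

For circles, the contour-integral machinery of the next paragraphs reproduces exactly this lens area.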
Direct integration is not straightforward even for simple object shapes because these regions generally have complicated two-dimensional limits (Figure 9B) and may not even be simply connected. However, using the divergence theorem, we can transform this area integral into a simpler contour integral that follows object boundaries piecewise. The vector field V = ½c has divergence (in c-space) of ∇ · V = 1, so integrating this divergence over the desired region gives the enclosed area. The divergence theorem says that this integral equals the flux of V across the region boundary:
$$Q_\sigma(\rho) = \int dc\, P(c)\, Q_\sigma(c,\rho) = \int_C \nabla\cdot V\, dc = \oint_{\partial C} V\cdot\hat{n}\, ds, \tag{6}$$
where C is the region in c-space where Q_σ(c, ρ) = 1, ∂C is its boundary, n̂ is the unit normal vector to the boundary, and ds is the arc length element. The boundary is composed of piecewise smooth segments of the object outline centered on the sample points x_i (Figure 9B). We index the relevant segments by m ∈ M and represent the curves by s_m(t) for t′_m < t < t″_m, between the cusps at which the contour changes direction abruptly. The integral along each segment is then
$$A_m = \frac{1}{2}\int_{t'_m}^{t''_m} s_m(t)\cdot\hat{n}_m(t)\,dt, \tag{7}$$
and the complete contour integral is a sum over segments: Q_σ(ρ) = Σ_{m∈M} A_m. 
To average Q_σ(ρ) over the shape ensemble P(ρ), we need to compute ∫dρ P(ρ) Q_σ(ρ). Note that the set of piecewise smooth segments composing the contour ∂C may change depending on ρ (Figure 9C), so the ρ-integral must itself be done piecewise. We define an index ℓ specifying the regions R_ℓ in ρ-space where a given set of segments M_ℓ composes the contour. Within R_ℓ, the integral over ρ can then be carried out on each summand A_m separately, yielding
$$Q_\sigma = \sum_\ell \sum_{m \in M_\ell} \int_{R_\ell} d\rho\, P(\rho)\, A_m(\rho). \tag{8}$$
 
Carrying out this calculation explicitly, not just formally, requires some careful geometry. In the Supplementary material, we complete these calculations for an ensemble of elliptical objects with an inverse-cube power-law distribution of sizes. In principle, it is also possible to calculate all these probabilities exactly for various other shape ensembles with simple boundaries, such as polygons or compound objects comprising multiple circles. Other size ensembles can also be used. The mathematical techniques required to complete the calculations are essentially the same. 
For the figures presented in this paper, all objects were ellipses with uniformly distributed eccentricities between 1 and 4, uniformly distributed orientations, and an inverse-cube size distribution. Note that finite upper and lower bounds on object sizes are required for a well-behaved model that does not degenerate into either a uniform or a white noise image (Gousseau & Roueff, 2007; Lee et al., 2001; Ruderman, 1997). Approximate scale invariance holds only within those bounds. Here we set the upper and lower bounds to r₊ = 100 and r₋ = 1. For Figures 3–5, we used high-clutter conditions by setting the pixel spacing to 5r₋. For Figure 6, to replicate the relatively low-clutter conditions under which the natural image statistics were measured empirically (Geisler et al., 2001), we chose the pixel spacing to be r₋/5. 
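The object ensemble just described is easy to sample. This sketch draws radii from the inverse-cube law by inverting its CDF, plus uniform eccentricities and orientations; the function names are illustrative and not taken from the paper's supplementary code:

```python
import math, random

def sample_radius(u, r_min=1.0, r_max=100.0):
    """Inverse-CDF sample from p(r) proportional to r^-3 on [r_min, r_max].

    With a = r_min^-2 and b = r_max^-2, the CDF is F(r) = (a - r^-2)/(a - b),
    so r = (a - u*(a - b))^(-1/2) for u in [0, 1)."""
    a, b = r_min ** -2, r_max ** -2
    return (a - u * (a - b)) ** -0.5

def sample_ellipse(rng, r_min=1.0, r_max=100.0):
    """Draw one ellipse: inverse-cube size, uniform eccentricity in [1, 4],
    uniform orientation -- the ensemble used for the paper's figures."""
    r = sample_radius(rng.random(), r_min, r_max)
    ecc = rng.uniform(1.0, 4.0)          # ratio of major to minor axis
    phi = rng.uniform(0.0, math.pi)      # orientation
    return r, ecc, phi
```

Because p(r) falls off as r⁻³, small leaves dominate in number while large leaves still contribute substantially to coverage, which is what gives the approximate scale invariance within the bounds.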
Intensity and feature distributions
For simplicity, we assume that every object has a Gaussian-distributed mean intensity and an additive Gaussian white noise textural modulation, with variances Ξ₀ and Ξ₁, respectively. For this texture ensemble, the conditional distribution of pixel intensities is P(I∣ω) ∝ exp(−½ I⊤C_ω⁻¹I), with zero mean and covariance matrix given by (C_ω)_ij = Ξ₀ Σ_{n=1}^{∣ω∣} σ_i(ω, n) σ_j(ω, n) + Ξ₁δ_ij. In the results shown in this paper, Ξ₀ = 1 and Ξ₁ = 0.01. 
For features specified as linear combinations of intensities by f = FI, the conditional distribution is P(f∣ω) ∝ exp(−½ f⊤(FC_ωF⊤)⁻¹f), and the joint probability is the mixture distribution P(f) = Σ_ω P(ω) P(f∣ω). 
Averaging over image patches
Pixels in natural images are integrals of light intensity over a finite solid angle. In the dead leaves model, we can approximate these spatial integrals by summing over multiple points within an image patch X_i, defining
$$\bar{I}_i = \sum_{j:\, x_j \in X_i} I_j. \tag{9}$$
Using the white-noise texture model (Intensity and feature distributions section), the total intensity Ī_i over an image patch has a conditional distribution P(Ī_i∣ω), which is Gaussian with zero mean and variance
$$\mathrm{Var}(\bar{I}_i \mid \omega) = \sum_{j,k:\, x_j,x_k \in X_i} (C_\omega)_{jk} = \Xi_0 \sum_{n=1}^{|\omega|} |\omega_n|^2 + \Xi_1 |X_i|. \tag{10}$$
Here C_ω is the covariance matrix of all pixels in image patch X_i conditioned on the object membership function ω, and ∣ω_n∣ is the number of sampled pixels falling into the nth object. Thus, the variance increases with the square of the sampled area of each object and is maximized when only one object covers the sampling area. 
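The claim that the conditional variance of a patch sum grows as Ξ₀ Σₙ∣ωₙ∣² plus a noise term proportional to the number of samples can be checked by simulation. In this sketch, the membership is fixed by hand as a tuple of per-object pixel counts; the names and counts are illustrative:

```python
import random

def predicted_variance(sizes, xi0=1.0, xi1=0.01):
    """xi0 * sum |w_n|^2 + xi1 * N for per-object pixel counts `sizes`."""
    return xi0 * sum(c * c for c in sizes) + xi1 * sum(sizes)

def simulated_variance(sizes, xi0=1.0, xi1=0.01, trials=50_000, seed=1):
    """Monte Carlo variance of the patch sum: each object contributes one
    shared Gaussian mean plus independent white-noise texture per pixel."""
    rng = random.Random(seed)
    acc = acc2 = 0.0
    for _ in range(trials):
        total = 0.0
        for count in sizes:
            mean = rng.gauss(0.0, xi0 ** 0.5)          # one intensity per object
            for _ in range(count):
                total += mean + rng.gauss(0.0, xi1 ** 0.5)   # plus texture
        acc += total
        acc2 += total * total
    return acc2 / trials - (acc / trials) ** 2
```

Comparing memberships with the same total pixel count confirms that a single covering object maximizes the variance.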
A Haar wavelet takes the difference H = Ī₁ − Ī₂ between sums Ī₁ and Ī₂ over two distinct regions (Figure 10A). The corresponding variance does not necessarily increase with the square of each object's sampled area, because some of the samples are weighted with opposite signs and thus cancel. The conditional covariance between two Haar wavelets H_i and H_j is
$$\mathrm{Cov}(H_i, H_j \mid \omega) = \Xi_0 \sum_n \big(|\omega_{n,1,i}| - |\omega_{n,2,i}|\big)\big(|\omega_{n,1,j}| - |\omega_{n,2,j}|\big) + \Xi_1 N_{ij}, \tag{11}$$
where ∣ω_{n,k,i}∣ is the number of samples in region k of wavelet i that fall into the nth object (Figure 10A), and N_ij is the number of samples shared by wavelets H_i and H_j. 
Figure 10
 
Simplified representations of object configurations that dominate feature distributions at high amplitudes. Colors indicate objects of unspecified intensity; shading indicates weighting by Haar wavelets. (A) A Haar wavelet H_i takes a difference of intensities Ī_{1,i} and Ī_{2,i}, each totaled over a finite region. The pixels ω_{n,2,i}, contained both in elliptical object ω_n and in region 2 of wavelet i, are outlined. (B) Colocalized, orthogonal Haar wavelets with circular support. (C, D) Parallel, nearby Haar wavelets, with objects that induce negative and positive correlations, respectively. To simplify the calculations, objects differ only in their horizontal extent and extend completely to either the left or right edge of each wavelet. The relevant variables are then the widths of the overlap between the objects and the wavelet filter, denoted d_l and d_r. Compare these simplified configurations to those shown in Figures 4C, 4G, and 4H.
In Figures 4B and 4D, the diamond-shaped contours emerge as a consequence of Equation 11. Instead of the Haar wavelets with square support shown in that figure, it is simpler to understand the case with circular support (Figure 10B), though the result is the same. The maximum-amplitude features occur when a single object boundary passes through the center of the wavelet at an angle θ. The covariance of the mixture distribution conditioned on this object configuration is
$$C_{HH \mid \theta} = 2N^2 \Xi_0 \begin{pmatrix} (\pi - 2\theta)^2 & 2\theta(\pi - 2\theta) \\ 2\theta(\pi - 2\theta) & (2\theta)^2 \end{pmatrix} + N \Xi_1 \mathbb{1}. \tag{12}$$
For large N, this covariance matrix is nearly singular, with almost unity correlation coefficient between the variations along H₁ and H₂. Contours of the corresponding bivariate Gaussian have maximum extent at feature amplitudes proportional to (±θ, ±(π/2 − θ)). The envelope of these contours produces the diamond shown in Figures 4B and 4D. 
In Figure 4, two neighboring, parallel Haar wavelets have a joint distribution with a distinctive "pillow" shape. The dominant contributions at high feature amplitudes involve three objects, as depicted in Figure 4G: one covering the left edge of the wavelet, a second covering the right edge, and a third covering the gap between them. We can approximate this arrangement with a one-dimensional version, considering only the horizontal extent of objects (Figures 10C and 10D). If we denote by d_l and d_r how much the leftmost and rightmost objects overlap the wavelets, then the covariance of the mixture distribution is
$$C_{HH \mid d_l, d_r} = N^2 \Xi_0 \begin{pmatrix} 2\Delta_l^2 & \pm \Delta_l \Delta_r \\ \pm \Delta_l \Delta_r & 2\Delta_r^2 \end{pmatrix} + N \Xi_1 \mathbb{1}, \tag{13}$$
where Δ_i = min(d_i, 1 − d_i) and the width of each lobe of the Haar wavelet is 1. These components all have a correlation coefficient of nearly ±1/2 but have different variances. By changing d_l and d_r for the configuration shown in Figure 10C, we obtain conditional distributions with the ensemble of contours seen in Figure 4I. Their envelope produces the pillow shape (Figure 4). 
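The ±1/2 correlation claimed for Equation 13 is easy to verify numerically. The sketch below builds the simplified covariance for assumed overlaps d_l and d_r and checks its correlation coefficient; N and the variances are example values consistent with those used elsewhere in the text:

```python
import math

def haar_pair_cov(dl, dr, sign=1, n=100, xi0=1.0, xi1=0.01):
    """Conditional covariance of two nearby parallel Haar wavelets in the
    simplified 1D configuration: overlaps dl, dr in (0, 1), lobe width 1.
    C = N^2 Xi0 [[2 Dl^2, +/- Dl Dr], [+/- Dl Dr, 2 Dr^2]] + N Xi1 I."""
    Dl, Dr = min(dl, 1 - dl), min(dr, 1 - dr)
    base = [[2 * Dl * Dl, sign * Dl * Dr],
            [sign * Dl * Dr, 2 * Dr * Dr]]
    return [[n * n * xi0 * base[i][j] + (n * xi1 if i == j else 0.0)
             for j in range(2)] for i in range(2)]

def corrcoef(c):
    """Correlation coefficient of a 2x2 covariance matrix."""
    return c[0][1] / math.sqrt(c[0][0] * c[1][1])
```

Varying d_l and d_r changes the two variances independently while leaving the correlation near ±1/2, which is what generates the family of tilted contours whose envelope is the pillow shape.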
Shared causes of edges
To define oriented edges, we select four pixels arranged in a rectangle and select only those object membership functions that bisect these four pixels into two pairs. Note that a range of object boundaries can produce such a separation. Giving the rectangle an aspect ratio of 2.75 constrains edges to an allowed range of orientations 2 tan⁻¹(1/2.75) = 40° (Figure 11A) that matches the orientation bandwidth used in Geisler et al. (2001). Pairs of edges are described by two such bisected four-pixel clusters (Figure 11B). This definition of edge pairs restricts these eight pixels to have one of only seven possible object membership functions (Table 1A). In one of these configurations, every pixel pair is a member of a different object: ω = {12∣34∣56∣78}. In the remaining configurations, at least two pairs are members of the same object (Figure 6D). This latter category serves as one possible definition of a "shared cause" for the two edges. 
Figure 11
 
Detailed geometry for Figure 6. (A) An edge exists when an object splits four pixels into two pairs. Pixels arranged in a rectangle with an aspect ratio of Δx/Δy = 2.75 permit a range of edges with a 40° orientation bandwidth, as used in Geisler et al. (2001). (B) Pairs of edges thus defined are related by three parameters: distance d, orientation difference θ, and relative direction φ.
A second definition of shared cause invokes not just the object membership but also the relative depth of the objects. In particular, we use ordered membership functions ω̄ (Generalizations section), and we classify these ω̄ according to whether one pixel pair from each edge falls into the same object and that object is above the object present at the remaining pixels (Figure 6E). The relevant ω̄ are listed in Table 1B. 
With either definition, the likelihood ratio of shared cause to different cause is L = Σ_{ω∈S} P(ω) / Σ_{ω∈D} P(ω), where S and D are the sets of membership functions categorized as shared or different causes, respectively. This likelihood ratio varies as a function of the positions and relative orientation of the two edge pairs (Figures 6C–6D). 
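The likelihood ratio itself is just a ratio of summed membership probabilities. This sketch evaluates it for a made-up probability table; the membership labels and numbers are purely illustrative, not values computed from the model:

```python
def likelihood_ratio(p, shared, different):
    """L = sum of P(omega) over shared-cause memberships, divided by the
    sum over different-cause memberships."""
    return sum(p[w] for w in shared) / sum(p[w] for w in different)

# Hypothetical membership probabilities for one edge-pair geometry.
p = {
    "12|34|56|78": 0.40,   # every pixel pair on its own object
    "1256|34|78": 0.35,    # one object spans a pair from each edge
    "12|3478|56": 0.25,
}
shared = ["1256|34|78", "12|3478|56"]
different = ["12|34|56|78"]
L = likelihood_ratio(p, shared, different)   # 0.60 / 0.40 = 1.5
```

In the model proper, the table of P(ω) would come from the exact membership probabilities, and L would be recomputed for each edge-pair distance and relative orientation.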
Generalizations
We can calculate the relative depth of objects by using an ordered object membership function ω̄ rather than an unordered membership function ω. The recursion in this case is even simpler than Equation 3: there is no summation, because there is only one term, the one in which the first object is highest in the stack of objects. One may use a partial ordering if not all relative depths are of interest, in which case there is a sum over arrangements consistent with the partial ordering. 
Note that there are more hidden variables of interest besides object membership and relative depth, and their joint probabilities can be calculated by a similar recursive formula, without marginalizing away the hidden variables. The joint distribution of shape and membership, for instance, can be calculated by an analogous recursion in which ρ is now a vector of N shape parameters, with ρ_n indicating the shape parameters of the topmost object present at pixel location x_n. 
Empirical sampling of dead leaves and natural images
For probabilities involving many image points, we generate many dead leaves images and sample from them empirically to obtain histograms. Images are produced by layering objects from front to back until all image pixels are members of some object, a process that yields stationary image statistics (Kendall & Thonnes, 1999). For images with transparency τ, visibility decays exponentially with depth. Objects are generated until each point is covered at least 10/log(τ⁻¹) times (ten attenuation lengths), which ensures that omitting deeper objects gives only a negligible error. The software generating dead leaves images is available as Supplementary material. 
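For the opaque case, the front-to-back layering is straightforward to sketch: keep dropping leaves behind the existing image until no pixel remains uncovered. This minimal version uses circular leaves with inverse-cube radii and untextured Gaussian intensities, and is not the supplementary code itself:

```python
import random

def dead_leaves(size=48, r_min=2.0, r_max=12.0, seed=0):
    """Minimal opaque dead leaves image: layer discs front to back until
    every pixel belongs to some object.  Circular leaves, p(r) ~ r^-3,
    one Gaussian intensity per object, no texture, no wraparound."""
    rng = random.Random(seed)
    img = [[None] * size for _ in range(size)]
    uncovered = size * size
    a, b = r_min ** -2, r_max ** -2
    while uncovered:
        r = (a - rng.random() * (a - b)) ** -0.5       # inverse-cube radii
        cx, cy = rng.uniform(0, size), rng.uniform(0, size)
        shade = rng.gauss(0.0, 1.0)
        # Front to back: a pixel keeps the first (topmost) leaf that hits it.
        for y in range(max(0, int(cy - r)), min(size, int(cy + r) + 2)):
            for x in range(max(0, int(cx - r)), min(size, int(cx + r) + 2)):
                if img[y][x] is None and (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
                    img[y][x] = shade
                    uncovered -= 1
    return img
```

Because each pixel keeps only its first (topmost) leaf, stopping once coverage is complete is exact for opaque objects; the transparent case instead requires layering to the 10/log(τ⁻¹) depth described above.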
Natural images were drawn from van Hateren's image database (van Hateren & van der Schaaf, 1998) or the VisTex database (Pickard et al., 1995). Feature distributions were obtained by log-transforming images (Lee et al., 2001), filtering them by the relevant Haar wavelets, and computing univariate or bivariate histograms. 
Supplementary Materials
Supplementary PDF 
Code for generating dead leaves images DeadLeavesImage
Code for calculating feature probabilities DeadLeavesProbabilities
Acknowledgments
The author would like to thank Ken Miller, Larry Abbott, Stefano Fusi, Taro Toyoizumi, Vladimir Itskov, and Tony Movshon for helpful comments and suggestions. This work was supported by National Institutes of Health Grant EY13933 and the Swartz Foundation. 
Commercial relationships: none. 
Corresponding author: Xaq Pitkow. 
Email: xaq@neurotheory.columbia.edu. 
Address: 1051 Riverside Dr., Unit 87, Kolb Research Annex, Room 773, New York, NY 10032, USA. 
Footnotes
1. Technically, this theorem requires a geometric distribution for the number of summands. The observed distribution of the number of visible objects is not geometric, but it is similarly broad, with a width of the same order as its mean, so a similar result should hold.
2. The slight squashing of the diamond shape seen for natural images is a consequence of gravity: in this natural image database, there are more vertical contours than horizontal ones. The dead leaves model does not show this asymmetry because it is isotropic by construction.
References
Adelson E. Bergen J. (1985). Spatiotemporal energy models for the perception of motion. Journal of the Optical Society of America A, 2, 284–299. [CrossRef]
Alvarez L. Gousseau Y. Morel J. (1999). Scales in natural images and a consequence on their bounded variation norm. Scale-Space Theories in Computer Vision, 247–258.
August J. Zucker S. W. (2000). Perceptual organization for artificial vision systems. Norwell, MA: Kluwer Academic Publishers.
Battaglia P. Jacobs R. Aslin R. (2003). Bayesian integration of visual and auditory signals for spatial localization. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 20, 1391–1397. [CrossRef]
Bell A. Sejnowski T. (1997). The “independent components” of natural scenes are edge filters. Vision Research, 37, 3327. [CrossRef] [PubMed]
Bordenave C. Gousseau Y. Rouefi F. (2006). The dead leaves model: A general tessellation modeling occlusion. Advances in Applied Probability, 38, 31–46. [CrossRef]
Buccigrossi R. Simoncelli E. (1999). Image compression via joint statistical characterization in the wavelet domain. IEEE Transactions on Image Processing, 8, 1688–1701. [CrossRef] [PubMed]
Chow C. Jin D. Treves A. (2002). Is the world full of circles? Journal of Vision, 2, (8):4, 571–576, http://www.journalofvision.org/content/2/8/4, doi:10.1167/2.8.4. [PubMed] [Article] [CrossRef]
Corney D. Lotto R. (2007). What are lightness illusions and why do we see them. PLoS Computational Biology, 3, e180.
Ernst M. Banks M. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415, 429–433. [CrossRef] [PubMed]
Felsen G. Touryan J. Han F. Dan Y. (2005). Cortical sensitivity to visual features in natural scenes. PLoS Biology, 3, e342.
Geisler W. Perry J. (2009). Contour statistics in natural images: Grouping across occlusions. Visual Neuroscience, 26, 109–121. [CrossRef] [PubMed]
Geisler W. Perry J. Super B. Gallogly D. (2001). Edge co-occurrence in natural images predicts contour grouping performance. Vision Research, 41, 711–724. [CrossRef] [PubMed]
Gnedenko B. (1972). Limit theorems for sums of a random number of positive independent random variables. Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, II, (537–549).
Gousseau Y. Roueff F. (2007). Modeling occlusion and scaling in natural images. Multiscale Modeling and Simulation, 6, 105–134. [CrossRef]
Grenander U. Srivastava A. (2001). Probability models for clutter in natural images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23, 424–429. [CrossRef]
Hancock P. Baddeley R. Smith L. (1992). The principal components of natural images. Network: Computation in Neural Systems, 3, 61–70. [CrossRef]
Hoffman D. (1998). Visual intelligence: How we create what we see. New York: W W Norton and Co.
Howe C. Purves D. (2005). Natural-scene geometry predicts the perception of angles and line orientation. Proceedings of the National Academy of Sciences, 99, 13184–13188. [CrossRef]
Hubel D. Wiesel T. (1962). Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 160, 106–154. [CrossRef] [PubMed]
Karklin Y. Lewicki M. (2005). A hierarchical Bayesian model for learning nonlinear statistical regularities in nonstationary natural signals. Neural Computation, 17, 397–423. [CrossRef] [PubMed]
Kendall W. Thonnes E. (1999). Perfect simulation in stochastic geometry. Pattern Recognition, 32, 1569–1586. [CrossRef]
Kording K. Ku S. Wolpert D. (2004). Bayesian integration in force estimation. Journal of Neurophysiology, 92, 3161–3165. [CrossRef] [PubMed]
Kotz S. Kozubowski T. Podgfiorski K. (2001). The Laplace distribution and generalizations (pp. 30–31). Boston: Birkhäuser.
Langer M. (2008). Surface visibility probabilities in 3D cluttered scenes. Computer Vision—ECCV 2008, 1, 401–412.
Laughlin S. (1981). A simple coding procedure enhances a neuron's information capacity. Zeitschrift für Naturforschung, 36, 910–912.
Lee A. Mumford D. Huang J. (2001). Occlusion models for natural images: A statistical study of a scale-invariant dead leaves model. International Journal of Computer Vision, 41, 35–59. [CrossRef]
Liu Y. Shouval H. (1994). Localized principal components of natural images—An analytic solution. Network: Computation in Neural Systems, 5, 317–325. [CrossRef]
Ma W. Beck J. Latham P. Pouget A. (2006). Bayesian inference with probabilistic population codes. Nature Neuroscience, 9, 1432–1438. [CrossRef] [PubMed]
MacLeod A. (1996). Algorithm 757, miscfun: A software package to compute uncommon special functions. ACM Transactions on Mathematical Software, 22, 288–301. [CrossRef]
Mallat S. (1989). A theory for multiresolution signal decomposition: The wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2, 674–693. [CrossRef]
Matheron F. (1975). Random sets and integral geometry. New York: John Wiley and Sons.
Mazurek M. Roitman J. Ditterich J. Shadlen M. (2003). A role for neural integrators in perceptual decision making. Cerebral Cortex, 13, 1257–1269. [CrossRef] [PubMed]
Mumford D. Gidas B. (2001). Stochastic models for generic images. Quarterly of Applied Mathematics, 54, 85–111. [CrossRef]
Murray S. Kersten D. Olshausen B. Schrater P. Woods D. (2002). Shape perception reduces activity in human primary visual cortex. Proceedings of the National Academy of Sciences of the United States of America, 99, 15164–15169. [CrossRef] [PubMed]
Olshausen B. Field D. (1996). Wavelet-like receptive fields emerge from a network that learns sparse codes for natural images. Nature, 381, 607–609. [CrossRef] [PubMed]
Pickard R. Graszyk C. Mann S. Wachman J. Pickard L. (1995). Vistex Database, http://www-white.media.mit.edu/vismod/imagery/VisionTexture/vistex.html, accessed June 2010.
Portilla J. Simoncelli E. (2000). A parametric texture model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision, 40, 49–71. [CrossRef]
Reinagel P. Laughlin S. (2001). Natural stimulus statistics. Network, 12, 237–240. [CrossRef] [PubMed]
Ruderman D. (1997). Origins of scaling in natural images. Vision Research, 37, 3385–3398. [CrossRef] [PubMed]
Ruderman D. L. Bialek W. (1994). Statistics of natural images: Scaling in the woods. Physical Review Letters, 73, 814–817. [CrossRef] [PubMed]
Rust N. C. Movshon J. A. (2005). In praise of artifice. Nature Neuroscience, 8, 1647–1650. [CrossRef] [PubMed]
Saunders J. Knill D. (2001). Perception of 3D surface orientation from skew symmetry. Vision Research, 41, 3163–3185. [CrossRef] [PubMed]
Sharpee T. Bialek W. (2007). Neural decision boundaries for maximal information transmission. PLoS ONE, 2, e646.
Sigman M. Cecchi G. Gilbert C. Magnasco M. (2001). On a common circle: Natural scenes and gestalt rules. Proceedings of the National Academy of Sciences of United States of America, 98, 1935–1940. [CrossRef]
Simoncelli E. Olshausen B. (2001). Natural image statistics and neural representations. Annual Review of Neuroscience, 24, 1193–1216. [CrossRef] [PubMed]
Simoncelli E. Schwartz O. (1999). Modeling surround suppression in V1 neurons with a statistically derived normalization model. Advances in Neural Information Processing Systems, 11, 153–159.
Stocker A. Simoncelli E. (2006). Noise characteristics and prior expectations in human visual speed perception. Nature Neuroscience, 9, 578–585. [CrossRef] [PubMed]
Stuart-Alexander D. (1978). Geologic map of the far side of the moon. US Geological Survey I-1047.
van Hateren J. (1997). Processing of natural time series of intensities by the visual system of the blowfly. Vision Research, 37, 3407–3416. [CrossRef] [PubMed]
van Hateren J. van der Schaaf A. (1998). Independent component filters of natural images compared with simple cells in primary visual cortex. Proceedings of the Royal Society of London B, 265, 329–366.
Wainwright M. Simoncelli E. (2000). Scale mixtures of Gaussians and the statistics of natural images. Advances in Neural Information Processing Systems, 12, 855–861.
Zetzsche C. Nuding U. (2005). Natural scene statistics and nonlinear neural interactions between frequency-selective mechanisms. Biosystems, 79, 143–149. [CrossRef] [PubMed]
Zhou H. Friedman H. S. von der Heydt R. (2000). Coding of border ownership in monkey visual cortex. Journal of Neuroscience, 20, 6594–6611. [PubMed]
Figure 1
 
Sample images generated by the dead leaves model. We see layers of objects with random sizes, shapes, positions, and brightnesses that occlude other objects below. (A) All objects are black or white circles with a relatively narrow range of sizes. (B) All objects are textured ellipses with a broad range of sizes drawn from a distribution proportional to size⁻³, producing approximate scale invariance (Gousseau & Roueff, 2007; Lee et al., 2001; Ruderman, 1997). Straightforward generalizations allow other ensembles of shape and texture. (C) Illustration of an object membership function ω. Pixels within a member set of ω are all sampled from the same object. A sample dead leaves image with several objects (gray circles) and a set of six pixel locations (numbered points) are shown. For this configuration, the object membership function is ω = {126∣3∣45}.
Figure 2
 
Joint probabilities of pixel intensities, based on an ensemble of elliptical objects and Gaussian-distributed object intensities with an additive Gaussian white noise texture (Methods section). Contour plots are shown for (A) two pixels and (B) three pixels arranged in an equilateral triangle. These joint distributions are weighted averages of independent and correlated distributions. The weighting factors are the various object membership probabilities P( Image not available ), which are plotted below the joint intensity distributions as a function of the distance between pixels.
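The object membership probabilities plotted beneath the joint distributions can also be estimated by simulation. The sketch below is a hypothetical Monte Carlo check, not the paper's exact calculation: for a dead-leaves process with fixed-radius disks, each pixel belongs to the first disk (examined front to back) that covers it, and we count how often two pixels a distance d apart land on the same disk.

```python
import numpy as np

def p_same_object(d, radius=10.0, extent=100.0, trials=2000, seed=0):
    """Monte Carlo estimate of the probability that two pixels a
    distance d apart sample the same (topmost) disk in a dead-leaves
    process with fixed disk radius and uniform random centers."""
    rng = np.random.default_rng(seed)
    p1 = np.array([extent / 2, extent / 2])
    p2 = p1 + np.array([d, 0.0])
    same = 0
    for _ in range(trials):
        obj1 = obj2 = None
        k = 0
        # Examine disks front to back until each pixel is covered.
        while obj1 is None or obj2 is None:
            c = rng.random(2) * extent
            if obj1 is None and np.sum((c - p1) ** 2) <= radius ** 2:
                obj1 = k
            if obj2 is None and np.sum((c - p2) ** 2) <= radius ** 2:
                obj2 = k
            k += 1
        same += (obj1 == obj2)
    return same / trials
```

The estimate decays from 1 at zero separation to 0 once d exceeds the disk diameter, mirroring the falloff of the same-object weighting factor with pixel distance.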
Figure 3
 
Log-probability feature distributions log P(f) of spatial derivatives f. (A) Empirically sampled distribution of derivatives (depicted graphically, inset) in natural images. (B, C) Feature probabilities calculated exactly for dead leaves images, using just one and two samples per image patch, respectively (inset). (D) Empirically sampled distributions for dead leaves images using a 16 × 16 grid of samples per patch (inset). (E, F) Mixture components P(f ∣ Image not available ) corresponding to (B) and (C), respectively.
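The mixture structure shown in panels (E) and (F) can be illustrated with a toy two-component density. This is a hedged sketch with made-up parameter values, not the paper's exact result: a derivative f = I(x2) − I(x1) sees only texture noise when both pixels lie on one object, and independent object-intensity differences as well when they do not.

```python
import numpy as np

def derivative_logpdf(f, p_same, sigma_tex, sigma_obj):
    """Toy two-component mixture for a spatial derivative
    f = I(x2) - I(x1).  With probability p_same both pixels sample the
    same object, so f is texture noise alone (variance 2*sigma_tex**2);
    otherwise independent object intensities add another
    2*sigma_obj**2 of variance."""
    def gauss(x, s):
        return np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
    same = gauss(f, np.sqrt(2.0) * sigma_tex)
    diff = gauss(f, np.sqrt(2.0 * sigma_obj**2 + 2.0 * sigma_tex**2))
    return np.log(p_same * same + (1.0 - p_same) * diff)
```

Averaging a sharp same-object component with a broad different-object component in this way yields the sharply peaked, heavy-tailed log-probability profile seen in the panels.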
Figure 4
 
Joint feature probabilities log P(f1, f2) for wavelet pairs f1 and f2. For orthogonal, colocalized Haar wavelets (inset of (A), shifted for visibility), the contours of the empirically sampled bivariate distribution are diamond-shaped for both (A) natural images and (B) dead leaves images. At high feature amplitudes, certain object configurations have the greatest likelihood and thus dominate the joint distribution. Panel (C) illustrates one such configuration. Colors indicate different objects with unspecified intensities. Dark and light shadings show how the two wavelets weight the image pixels. (D) Specifying only the object geometry (but not the object intensities) gives conditional feature distributions P(f1, f2 ∣ Image not available ) that are bivariate Gaussians with elliptical contours. For the conditional distributions that dominate at high feature amplitude, the contours trace a diamond-shaped envelope (thick curve) as a function of the relative angles between the object boundary and wavelet orientations (Averaging over image patches section). Parallel, neighboring wavelets (inset of (E)) are anticorrelated, with joint probability contours exhibiting a similar pillow shape in (E) natural images and (F) dead leaves images. Panels (G) and (H) illustrate object configurations that dominate at large feature amplitudes, colored as in (C). (G) If one object covers the opposite-sign lobes of neighboring wavelets while others prevent cancellation by the negative lobe, then the conditional feature distribution will have a negative correlation. (H) Similarly, if one object covers the same-signed lobes of both features while another object prevents cancellation, then the conditional distribution will have a positive correlation. Configurations like this are much less probable than those like (G), because the middle object must have precisely the right size and position.
(I) An ensemble of configurations like (G) produces negatively correlated components (gray ellipses) that vary depending on how precisely the objects cover the feature lobes. The positively correlated components (dashed ellipse) caused by configurations like (H) are many times less probable. Discounting the latter gives the mixture distribution an overall negative correlation, leaving components that trace out the pillow-shaped envelope (thick curve) seen in feature distribution contours (Methods section).
Figure 5
 
“Bowtie” shapes appear in empirically sampled conditional feature distributions P(f2 ∣ f1) for both (A) natural images and (B) dead leaves images. Horizontal and vertical axes represent the coefficients of two neighboring, orthogonal Haar wavelet filters, f1 and f2 (inset of (A)). The grayscale is normalized so black represents 0 and white is the maximum probability for a given f1. (C, D) Two equally probable object configurations, colored as in Figure 4C, have identical f1 but opposite f2. Both features are proportional to the intensity difference between foreground and background objects. (E) Conditional feature distribution with only four samples per feature (inset). Traces of the limited sampling appear as the faint diagonal bands passing through the origin (highlighted with dotted lines on right half). Each distinct band corresponds to a conditional distribution given a different object membership function, P(f2 ∣ f1, Image not available ). Symmetry ensures that there will be no linear correlation between the two features, even as the width of P(f2 ∣ f1) increases with ∣f1∣. With features sampled more densely, more such diagonal bands appear, until the bands blend together (B). This produces the distinctive bowtie shape in the conditional feature distributions.
Figure 6
 
Joint statistics of local edges and global contours. (A) The likelihood ratio that edge pairs in natural images are caused by a common object versus by different objects (replotted from Geisler et al., 2001 with permission). For test edges at many distances, directions, and orientations relative to a reference edge (horizontal bar at origin), line segments are colored to indicate the likelihood ratio (Methods section). The segments are sorted so those indicating high likelihoods appear in front. Concentric white rings correspond to unsampled distances. In the dead leaves model, we can define the corresponding likelihood in one of two ways. First, a pair of edges could have a “shared cause” if at least one side of each edge samples from the same object. The resultant likelihood is shown in (B) and an example of a shared cause is shown in (C). Second, we may add a depth constraint to better describe the existence of a shared contour: this shared object must also be on top of the other objects. Using this second definition, (D) shows the likelihoods and (E) gives a sample configuration. These likelihood ratios are on average 60 times lower than in (B) and reveal a pattern very similar to the observations made in natural images (A).
Figure 7
 
Comparison of feature distributions for the dead leaves model using different size distributions. The columns show sample images, marginal distributions of Haar wavelets, joint distributions of orthogonal and parallel wavelets, conditional distributions of wavelets, and contour likelihoods, each displayed as in Figures 3–6. (A–C) Objects are drawn from power-law size ensembles with exponents −2 to −4.
Figure 8
 
Feature distributions for the dead leaves model with partially opaque objects, shown as in Figure 7. (A–C) Objects have opacities of 1.0, 0.5, and 0.1.
Figure 9
 
Diagrams illustrating inclusion probabilities Qσ( Image not available , n). (A) A “leaf” function showing the shape of an object. L1(x,ρ) = 1 at points x that are inside an object of shape parameter ρ, and L0(x,ρ) = 1 at points outside it. Here the shape parameter ρ specifies a smooth irregular object. (B) Sample indicator functions Qσ( Image not available , n)(c,ρ) identify locations c where an object could be placed to enclose all pixels xi in Image not availablen and exclude the rest. Rotated copies of the object shape surround each point xi, designating the locations c where that object will enclose xi. An arrow points to the shaded region where an object could be placed to enclose both x1 and x3 but not x2, whose area is Q101(ρ). The other shaded region indicates locations where an object would enclose only x2, whose area is Q010(ρ). Note that this diagram represents possible locations c of a single object, not three objects! (C) The indicator functions change shape in a complex manner as a function of the shape parameter ρ, as illustrated for the outlined region shown at four different object sizes. This makes it challenging to integrate their areas over the shape ensemble to obtain the inclusion probabilities Qσ( Image not available , n).
Figure 10
 
Simplified representations of object configurations that dominate feature distributions at high amplitudes. Colors indicate objects of unspecified intensity; shading indicates weighting by Haar wavelets. (A) A Haar wavelet Hi takes a difference of intensities Ī1,i and Ī2,i, each totaled over a finite region. The pixels Image not availablen2,i contained both in elliptical object Image not availablen and in region 2 of wavelet i are outlined. (B) Colocalized, orthogonal Haar wavelets with circular support. (C, D) Parallel, nearby Haar wavelets, with objects that induce negative and positive correlations, respectively. To simplify the calculations, objects differ only in their horizontal extent and extend completely to either the left or right edge of each wavelet. The relevant variable is then the width of the overlap between the object and the wavelet filter, denoted dl and dr. Compare these simplified configurations to those shown in Figures 4C, 4G, and 4H.
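The Haar coefficients described in panel (A) amount to differences of mean intensities over the two halves of a patch. The following is a minimal sketch under assumed conventions (a square patch, mean-based rather than any particular wavelet normalization): f1 is a left-right difference and f2 a top-bottom difference, giving the colocalized, orthogonal pair of panel (B).

```python
import numpy as np

def haar_pair(patch):
    """Coefficients of two colocalized, orthogonal Haar-style features
    on a square patch: f1 = mean(left half) - mean(right half),
    f2 = mean(top half) - mean(bottom half)."""
    h, w = patch.shape
    f1 = patch[:, : w // 2].mean() - patch[:, w // 2 :].mean()
    f2 = patch[: h // 2, :].mean() - patch[h // 2 :, :].mean()
    return f1, f2
```

For a patch that one object splits cleanly into a bright left half and a dark right half, f1 equals the intensity step across the boundary while f2 vanishes, which is why single-edge configurations populate the axes of the joint feature distributions.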
Figure 11
 
Detailed geometry for Figure 6. (A) An edge exists when an object splits four pixels into two pairs. Pixels arranged in a rectangle with an aspect ratio of Δxy = 2.75 permit a range of edges with a 40° orientation bandwidth as used in Geisler et al. (2001). (B) Pairs of edges thus defined are related by three parameters: distance d, orientation difference θ, and relative direction φ.
Table 1
 
Object membership functions used for joint edge statistics. For compactness, we represent object membership functions by the pixel indices divided symbolically into ordered or unordered groups. For example, {{x1,x2},{x3,x4}} is written as 12 ∣ 34 if unordered, and as 12 > 34 if ordered such that the object containing points x1 and x2 lies above the object containing x3 and x4. These object membership functions are classified according to whether they reflect a shared cause or different causes for the two edges, using (A) unordered or (B) ordered representations.
(A) Classification of unordered Image not available

S: Shared cause
   1256 ∣ 34 ∣ 78
   1278 ∣ 34 ∣ 56
   12 ∣ 56 ∣ 3478
   12 ∣ 78 ∣ 3456
   1256 ∣ 3478
   1278 ∣ 3456
D: Different causes
   12 ∣ 34 ∣ 56 ∣ 78
(B) Classification of ordered Image not available

S: Shared cause
   1256 > 34 ∣ 78    34 ∣ 78 > 1256    34 > 1256 > 78    78 > 1256 > 34
   3478 > 12 ∣ 56    12 ∣ 56 > 3478    12 > 3478 > 56    56 > 3478 > 12
   1278 > 34 ∣ 56    34 ∣ 56 > 1278    34 > 1278 > 56    56 > 1278 > 34
   3456 > 12 ∣ 78    12 ∣ 78 > 3456    12 > 3456 > 78    78 > 3456 > 12
   1256 ∣ 3478
   1278 ∣ 3456
D: Different causes
   12 ∣ 34 ∣ 56 ∣ 78
Table 2
 
Summary of assumptions used in calculations and their purposes.
Assumption                                          Purpose
General
   Occlusion                                        Improve realism of model images
   Object independence                              Solve general dead leaves model
Shape
   Elliptical shapes                                Find exact membership probabilities P( Image not available )
   Inverse-cubic size distribution                  Approximate scale invariance
   Minimum size bigger/smaller than image feature   Simulate low-/high-clutter images
Texture
   Gaussian white noise texture                     Simplify conditional intensities P(I ∣ Image not available )
   Low-contrast texture                             Emphasize consequences of occlusion