**Estimating an accurate and naturalistic dense depth map from a single monocular photographic image is a difficult problem. Nevertheless, human observers have little difficulty understanding the depth structure implied by photographs. Two-dimensional (2D) images of the real-world environment contain significant statistical information regarding the three-dimensional (3D) structure of the world that the vision system likely exploits to compute perceived depth, monocularly as well as binocularly. Toward understanding how this might be accomplished, we propose a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images. These features are derived using well-accepted univariate natural scene statistics (NSS) models and recent bivariate/correlation NSS models that describe the relationships between 2D photographic images and their associated depth maps. This is accomplished by building a dictionary of canonical local depth patterns from which NSS features are extracted as prior information. The dictionary is used to create a multivariate Gaussian mixture (MGM) likelihood model that associates local image features with depth patterns. A simple Bayesian predictor is then used to form spatial depth estimates. The depth results produced by the model, despite its simplicity, correlate well with ground-truth depths measured by a current-generation terrestrial light detection and ranging (LIDAR) scanner. Such a strong form of statistical depth information could be used by the visual system when creating overall estimated depth maps incorporating stereopsis, accommodation, and other conditions. Indeed, even in isolation, the Bayesian predictor delivers depth estimates that are competitive with state-of-the-art “computer vision” methods that utilize highly engineered image features and sophisticated machine learning algorithms.**

Natural scene statistics (NSS) models have been shown to provide good descriptions of the statistical laws that govern the 3D natural world and 2D images of it. NSS models have proven to be deeply useful tools both for understanding the evolution of human vision systems (HVS; Olshausen & Field, 1996; Simoncelli & Olshausen, 2001) and for modeling diverse visual problems (Portilla, Strela, Wainwright, & Simoncelli, 2003; Tang, Joshi, & Kapoor, 2011; Wang & Bovik, 2011; Bovik, 2013). In particular, work has explored the 3D NSS of depth/disparity maps of the world, how they correlate with 2D luminance/color NSS, and how such models can be applied. For example, Potetz and Lee (2006) examined the relationships between luminance and range over multiple scales and applied their results to a shape-from-shading problem. Y. Liu, Cormack, and Bovik (2011) explored the statistical relationships between luminance and disparity in the wavelet domain, and applied the derived models to improve a canonical Bayesian stereo algorithm. Su, Cormack, and Bovik (2013) proposed new models of the marginal and conditional statistical distributions of the luminances/chrominances and the disparities/depths associated with natural images, and used these models to significantly improve a chromatic Bayesian stereo algorithm. Recently, Su, Cormack, and Bovik (2014b, 2015a) developed new bivariate and correlation NSS models that capture the dependencies between spatially adjacent bandpass responses like those of area V1 neurons, and applied them to model both natural images and depth maps. The authors further utilized these models to create a blind 3D perceptual image quality model (Su, Cormack, & Bovik, 2015b) that operates on distorted stereoscopic image pairs. An algorithm derived from this model was shown to deliver quality predictions that correlate very highly with recorded human subjective judgments of 3D picture quality.

**x** ∈ ℝ^N, *y* ∈ ℝ^+. Note that the feature vector **x** ∈ ℝ^N may embed dependencies (i.e., among the spatially neighboring bandpass image responses). In order to capture these second-order statistics, we adopt a closed-form correlation model, described in detail in the next subsection, to extract the corresponding NSS features. In our implementation, we model the bivariate empirical histograms of horizontally adjacent subband responses of each image patch using a bivariate generalized Gaussian distribution (BGGD) with **x** ∈ ℝ², and estimate the BGGD model parameters using the maximum likelihood estimation (MLE) algorithm described in Su, Cormack, and Bovik (2014a). In our case, the scatter matrix of the BGGD captures the correlation between the horizontally adjacent subband responses.
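The BGGD fit can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a standard bivariate generalized Gaussian density of the form p(**x**) ∝ exp(−(**x**^⊤Σ^{−1}**x**)^β/2) with scatter matrix Σ = α[[1, ρ], [ρ, 1]], and estimates (α, ρ, β) by direct numerical maximum likelihood rather than by the specific algorithm of Su, Cormack, and Bovik (2014a).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def bggd_nll(params, x):
    """Negative log-likelihood of a bivariate generalized Gaussian
    p(x) = beta / (pi * Gamma(1/beta) * 2**(1/beta) * sqrt(det(S)))
           * exp(-0.5 * (x^T S^{-1} x)**beta),
    with scatter matrix S = alpha * [[1, rho], [rho, 1]].
    Parameters are transformed so the optimizer works unconstrained."""
    log_alpha, z, log_beta = params
    alpha, rho, beta = np.exp(log_alpha), np.tanh(z), np.exp(log_beta)
    det = alpha**2 * (1.0 - rho**2)
    # Quadratic form x^T S^{-1} x for S = alpha * [[1, rho], [rho, 1]]
    q = (x[:, 0]**2 - 2*rho*x[:, 0]*x[:, 1] + x[:, 1]**2) / (alpha*(1 - rho**2))
    log_norm = (np.log(beta) - np.log(np.pi) - gammaln(1.0/beta)
                - np.log(2.0)/beta - 0.5*np.log(det))
    return -np.sum(log_norm - 0.5 * q**beta)

def fit_bggd(x):
    """MLE fit of (alpha, rho, beta) to horizontally adjacent subband pairs."""
    res = minimize(bggd_nll, x0=[0.0, 0.0, 0.0], args=(x,),
                   method="Nelder-Mead")
    log_alpha, z, log_beta = res.x
    return np.exp(log_alpha), np.tanh(z), np.exp(log_beta)
```

For β = 1 the density reduces to a bivariate Gaussian, so fitting Gaussian-distributed responses should recover β ≈ 1 and ρ equal to the sample correlation, which makes a convenient sanity check.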

*θ*₂ − *θ*₁ = *kπ*, *k* ∈ ℤ, yielding a three-parameter exponentiated cosine model.
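As an illustration of fitting such a model: the paper's exact parameterization is not reproduced here, so the sketch below assumes a generic π-periodic, three-parameter exponentiated-cosine form f(Δθ) = a·|cos Δθ|^b + c (consistent with the θ₂ − θ₁ = kπ periodicity above) and fits it by nonlinear least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_cosine(dtheta, a, b, c):
    # Assumed pi-periodic exponentiated cosine: a * |cos(dtheta)|^b + c
    return a * np.abs(np.cos(dtheta))**b + c

# Fit the model to (synthetic) correlation measurements
theta = np.linspace(0.0, np.pi, 50)
rho_obs = exp_cosine(theta, 0.8, 2.0, 0.1)   # stand-in for measured correlations
popt, _ = curve_fit(exp_cosine, theta, rho_obs, p0=[1.0, 1.0, 0.0])
```

In practice `rho_obs` would be the empirical correlations measured at each orientation difference; the noise-free example here simply verifies that the fit recovers the generating parameters.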

[(*D*(*x* + 1, *y*) − *D*(*x* − 1, *y*)) (*D*(*x*, *y* + 1) − *D*(*x*, *y* − 1))]^⊤
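This bracketed expression is the central-difference approximation to the depth-map gradient at pixel (*x*, *y*), which can be computed directly:

```python
import numpy as np

def depth_gradient(D, x, y):
    """Central-difference depth gradient
    [D(x+1, y) - D(x-1, y), D(x, y+1) - D(x, y-1)]^T.
    D is indexed as D[y, x] (row = y-coordinate), the usual image convention."""
    gx = D[y, x + 1] - D[y, x - 1]
    gy = D[y + 1, x] - D[y - 1, x]
    return np.array([gx, gy])
```

Note that the central difference spans two pixels, so on a planar depth map D(x, y) = 2x + 3y it returns [4, 6], i.e., twice the per-pixel slope in each direction.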

**x** = **f**_*I* ∈ ℝ^K. For each canonical depth pattern, an MGM model is created using the feature vectors extracted from all of the image patches within the pattern. Therefore, the likelihood of encountering an image patch with a specific extracted feature
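Evaluating the likelihood of a feature vector under a per-pattern MGM, and combining it with the depth-pattern prior via Bayes' rule, can be sketched as below. The mixture parameters here are placeholders for illustration, not values learned in the paper:

```python
import numpy as np

def mgm_loglik(f, weights, means, covs):
    """Log-likelihood of feature vector f under a multivariate Gaussian
    mixture with the given component weights, means, and covariances."""
    logs = []
    for w, mu, S in zip(weights, means, covs):
        d = f - mu
        _, logdet = np.linalg.slogdet(S)
        q = d @ np.linalg.solve(S, d)
        logs.append(np.log(w) - 0.5 * (len(f)*np.log(2*np.pi) + logdet + q))
    m = max(logs)                                   # log-sum-exp for stability
    return m + np.log(sum(np.exp(v - m) for v in logs))

def pattern_posterior(f, patterns, prior):
    """Posterior over canonical depth patterns:
    p(pattern | f) ∝ p(f | pattern) * p(pattern)."""
    logp = np.array([mgm_loglik(f, *p) for p in patterns]) + np.log(prior)
    logp -= logp.max()
    p = np.exp(logp)
    return p / p.sum()
```

Each element of `patterns` is a `(weights, means, covs)` triple for one canonical depth pattern; the predictor then selects (or averages over) patterns according to this posterior.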

which reflects the performance consistency of the examined depth estimation algorithms. Natural3D delivers more consistent performance in terms of Log10, while providing similar or better Rel. and RMS performance than Depth Transfer.
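The three error metrics referenced here (Rel., Log10, and RMS) are the standard ones in the monocular depth-estimation literature and can be computed as:

```python
import numpy as np

def depth_error_metrics(d_est, d_true):
    """Standard depth-estimation errors over valid (positive) depths:
    Rel   = mean(|d_est - d_true| / d_true)        (mean relative error)
    Log10 = mean(|log10(d_est) - log10(d_true)|)   (mean log10 error)
    RMS   = sqrt(mean((d_est - d_true)**2))        (root-mean-squared error)"""
    d_est = np.asarray(d_est, dtype=float)
    d_true = np.asarray(d_true, dtype=float)
    rel = np.mean(np.abs(d_est - d_true) / d_true)
    log10 = np.mean(np.abs(np.log10(d_est) - np.log10(d_true)))
    rms = np.sqrt(np.mean((d_est - d_true)**2))
    return rel, log10, rms
```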

*k*-means algorithm for learning the depth prior. To demonstrate the influence of the number of canonical depth patterns on the performance of Natural3D, we trained and tested the algorithm using different numbers of clusters in the *k*-means algorithm, and plotted in Figure 23 the three error metrics as a function of the number of canonical depth patterns. While the relative error drops slightly as the number of canonical depth patterns increases, the RMS error increases. This result suggests that although more canonical depth patterns may help when estimating relative distances between objects, the larger number of depth priors can degrade regression performance when estimating absolute distances. It also agrees with our observation during prior-model development that five canonical depth patterns predominate in natural environments. Using more than five clusters in the *k*-means algorithm may therefore produce redundant depth patterns, whose regression models are trained on incomplete image data when estimating absolute distances: image features belonging to similar depth patterns may be misclassified into different clusters and used to train different regression models. As a result, to achieve the best depth estimation performance, we chose five canonical depth patterns: five clusters in the *k*-means algorithm.
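A minimal version of this clustering step is sketched below: plain Lloyd's *k*-means over flattened (and, in practice, normalized) depth patches, standing in for whatever specific implementation was used; with `k=5` it would produce the five canonical depth patterns discussed above.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's k-means: cluster flattened depth patches (rows of X)
    into k canonical depth patterns, returning centers and labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each patch to its nearest pattern center
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # Recompute each center as the mean of its assigned patches
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels
```

Each resulting center is one "canonical depth pattern," and its assigned patches supply the training data for that pattern's likelihood and regression models.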
