**The statistics of real-world images have been extensively investigated, but in virtually all cases using only low dynamic range image databases. The few studies that have considered high dynamic range (HDR) images have performed statistical analyses that categorize images as HDR according to their creation technique, not according to the actual dynamic range of the underlying scene. In this study we demonstrate, using a recent HDR dataset of natural images, that the statistics of the image received at the camera sensor change dramatically with dynamic range: particularly strong correlations with dynamic range are observed for the median, standard deviation, skewness, and kurtosis, while the 1/f power-spectrum relationship breaks down for images with a very high dynamic range, in practice making HDR images not scale-invariant. Effects are also noted in the derivative statistics, the single-pixel histograms, and the Haar wavelet analysis. However, we also show that after some basic early transforms occurring within the eye (light scatter, nonlinear photoreceptor response, center-surround modulation) the statistics of the resulting images become virtually independent of the dynamic range, which would allow them to be processed more efficiently by the human visual system.**


*f*-stops. Images are provided as equirectangular projections of an HDR spherical panorama, and each pixel represents an angle in space of 4 minutes of arc. An example of a simply tone-mapped image is presented in Figure 1. The camera captures the full field of view by rotating about a point in space, and uses multi-exposure capture and fusion to record HDR content. Details about the creation of the database are provided in Adams et al. (2016). The exposure time, ISO, and aperture size that centered the median intensity value of each image are provided, and these values were used to compute the irradiance maps.
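As a rough sketch of that last step, a linear sensor value can be converted to a quantity proportional to scene irradiance by dividing out the exposure factor (exposure time × ISO / f-number²). The function below is an illustrative stand-in, not the database's actual pipeline:

```python
import numpy as np

def relative_irradiance(pixels, exposure_time, iso, f_number):
    """Convert linear sensor values to relative scene irradiance.

    Sensor exposure is proportional to t * ISO / N^2, so dividing
    by that factor recovers a value proportional to irradiance
    (up to an unknown global scale).
    """
    exposure_factor = exposure_time * iso / f_number**2
    return np.asarray(pixels, dtype=float) / exposure_factor
```

With this normalization, the same scene captured at twice the exposure time yields the same irradiance estimate, which is what makes multi-exposure fusion possible.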

*equi2cubic* function. All bottom faces show the presence of the tripod and were therefore discarded; the top faces were discarded as well, due to the small amount of information provided by the upper views (mostly sky) combined with some difficulties in reprojecting them properly, as observed in Figure 2. Given that humans rarely look straight upwards, and infrequently straight downwards, our analyses might better represent the statistics that humans typically encounter. Each 1,712 × 1,712 pixel square image corresponds to a 90° × 90° field of view.
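A minimal version of this reprojection can be sketched as a gnomonic mapping from one 90° × 90° face back onto the equirectangular grid. This is an illustrative stand-in for the *equi2cubic* function, with nearest-neighbour sampling and no anti-aliasing:

```python
import numpy as np

def cube_face(equirect, face_size, yaw):
    """Extract one 90°x90° horizontal cube face from an equirectangular
    panorama by gnomonic projection. `yaw` is the azimuth of the face
    centre in radians; only the four side faces are needed here."""
    H, W = equirect.shape[:2]
    # Face pixel grid in [-1, 1]; a 90° FOV means tan(45°) = 1 at the edges.
    u, v = np.meshgrid(np.linspace(-1, 1, face_size),
                       np.linspace(-1, 1, face_size))
    # Direction vectors for a face looking along +x, rotated by yaw.
    x, y, z = np.ones_like(u), u, -v
    lon = np.arctan2(y, x) + yaw
    lat = np.arctan2(z, np.sqrt(x**2 + y**2))
    # Spherical coordinates to equirectangular pixel indices.
    j = ((lon / (2 * np.pi) + 0.5) % 1.0 * (W - 1)).round().astype(int)
    i = ((0.5 - lat / np.pi) * (H - 1)).round().astype(int)
    return equirect[i, j]
```

Calling this with four yaw values spaced 90° apart recovers the four side faces used in the analyses.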

*Y*_{min} and 99.99 percentile for *Y*_{max}. Please note that *Y*_{min} and *Y*_{max} are computed separately for each image.
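Under these definitions, the DR of an image in f-stops is log₂(*Y*_{max}/*Y*_{min}). A small sketch (the percentile choices follow the text; the function itself is ours):

```python
import numpy as np

def dynamic_range_fstops(luminance):
    """DR in f-stops from robust per-image percentiles.

    Y_min and Y_max are taken as the 0.01 and 99.99 percentiles
    of the luminance map, to exclude stray extreme pixels."""
    y = np.asarray(luminance, dtype=float).ravel()
    y = y[y > 0]  # log2 requires strictly positive values
    y_min = np.percentile(y, 0.01)
    y_max = np.percentile(y, 99.99)
    return np.log2(y_max / y_min)
```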

*θ* corresponds to the viewing angle, in degrees, from the point from which the light is spread, and *a* and *p* are parameters depending respectively on the age and the pigmentation of the subject's eye. For this study, an age of 25 years and a value of 0.5 for brown eyes were chosen arbitrarily. The resulting point spread function (PSF) filter was then applied by convolution to the images of our database, with the same method as in McCann and Vonikakis (2017).
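A sketch of this stage is given below. The kernel uses one common parameterisation of the CIE disability-glare falloff (terms in 1/θ³, 1/θ², and 1/θ, with age *A* and pigmentation *p*), normalised so that scatter redistributes, rather than creates, light; the discretisation and the FFT-based circular convolution are our own simplifications, not the exact procedure of McCann and Vonikakis (2017):

```python
import numpy as np

def glare_psf(size, deg_per_px, age=25.0, p=0.5):
    """Radial intra-ocular scatter kernel with a CIE-style
    disability-glare falloff; age and pigmentation modulate the tails."""
    half = size // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    theta = np.hypot(xx, yy) * deg_per_px
    theta = np.maximum(theta, deg_per_px / 2)  # avoid the singularity at 0
    psf = (10 / theta**3
           + (5 / theta**2 + 0.1 * p / theta) * (1 + (age / 62.5)**4)
           + 0.0025 * p)
    return psf / psf.sum()  # conserve total light

def apply_scatter(image, psf):
    """Convolve via FFT with circular boundary handling."""
    k = np.zeros_like(image, dtype=float)
    h, w = psf.shape
    k[:h, :w] = psf
    k = np.roll(k, (-(h // 2), -(w // 2)), axis=(0, 1))  # centre kernel at origin
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(k)))
```

Because the kernel sums to one, the total light in the image is preserved; the scatter only spreads energy from bright regions into their surroundings.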

*R* is the response to the light stimulus, *R*_{max} the peak response, *I* the incident light, and *I*_{s} the semi-saturation level (i.e., the intensity that causes the half-maximum response); a global geometric average of the median and the mean was chosen for the computation of *I*_{s}, as done in Ferradans, Bertalmío, Provenzi, and Caselles (2011).
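This stage transcribes directly into code. The geometric averaging of mean and median for *I*_{s} follows the text; the normalisation *R*_{max} = 1 is our choice:

```python
import numpy as np

def naka_rushton(I, r_max=1.0):
    """Naka-Rushton response R = r_max * I / (I + I_s), with the
    semi-saturation I_s set globally to the geometric mean of the
    image mean and median."""
    I = np.asarray(I, dtype=float)
    I_s = np.sqrt(np.mean(I) * np.median(I))
    return r_max * I / (I + I_s)
```

By construction, a pixel at exactly *I*_{s} maps to half the peak response, and arbitrarily bright pixels saturate towards *R*_{max}, which is what compresses very high dynamic ranges.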

*f* corresponds to the frequency in cycles/degree, *r*_{c} and *r*_{s} to the radii of the center and surround areas, and *K*, *k*_{c}, and *k*_{s} are the parameters of the filter; *r*_{c}, *r*_{s}, and the ratio *k*_{s}/*k*_{c} are fixed, while *k*_{c} and *K* are respectively chosen as 1 and 10 to get transformed values in the same order of magnitude as the ones before applying the CSF.
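The frequency response of such a difference-of-Gaussians filter can be sketched as below; the radii and the surround/centre sensitivity ratio used here are illustrative placeholders, not the fitted values of the text:

```python
import numpy as np

def dog_csf(f, r_c=0.05, r_s=0.4, k_c=1.0, K=10.0, sens_ratio=0.9):
    """Difference-of-Gaussians CSF in the frequency domain
    (f in cycles/degree). sens_ratio fixes the integrated
    surround/centre sensitivity, which determines k_s."""
    k_s = sens_ratio * k_c * (r_c / r_s) ** 2
    centre = k_c * np.pi * r_c**2 * np.exp(-(np.pi * r_c * f) ** 2)
    surround = k_s * np.pi * r_s**2 * np.exp(-(np.pi * r_s * f) ** 2)
    return K * (centre - surround)
```

Because the surround Gaussian is broad in space and therefore narrow in frequency, it cancels the centre response at low frequencies only, producing the band-pass behaviour discussed below.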

log(*I*(*i*, *j*)) − median(log(*I*)). Average histograms are then computed for each DR bracket and the results are plotted on log-log axes. In Figure 4 we plot the average histograms of the original linear images in subplot A and of the remaining transforms in subplots B, C, and D. Each color represents a different DR category.
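The computation for one image, and the per-bracket averaging, can be sketched as follows (the bin placement is our choice):

```python
import numpy as np

def log_luminance_histogram(image, bins):
    """Histogram of log(I) - median(log(I)), normalised to a
    probability density, as used for the single-pixel statistics."""
    logI = np.log(np.asarray(image, dtype=float).ravel())
    logI -= np.median(logI)
    hist, _ = np.histogram(logI, bins=bins, density=True)
    return hist

def bracket_average(images, bins):
    """Average the per-image histograms inside one DR bracket."""
    return np.mean([log_luminance_histogram(im, bins) for im in images],
                   axis=0)
```

Subtracting the median of the log makes the statistic invariant to a global rescaling of the image, so images within a bracket can be averaged regardless of their absolute exposure.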

*k* = 4.56; 1999) and the kurtosis found for the histogram of the highest DR bracket (*k* = 4.89, cf. Table 2) confirms this observation. We speculate upon why this is the case in the Discussion, where we replicate the original findings.

log(*I*(*i*, *j*)) − log(*I*(*i*, *j* + 1)), represented in Figure 5. Again, the statistical moments of the histograms are reported in Table 3, provided in the annex at the end of the document.
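A sketch of the derivative statistic and of the (non-excess) kurtosis used to summarise it:

```python
import numpy as np

def horizontal_log_derivative(image):
    """D(i, j) = log I(i, j) - log I(i, j + 1): horizontal
    nearest-neighbour differences of the log luminance."""
    logI = np.log(np.asarray(image, dtype=float))
    return (logI[:, :-1] - logI[:, 1:]).ravel()

def kurtosis(x):
    """Fourth standardised moment (not excess kurtosis),
    so a Gaussian gives 3."""
    x = x - np.mean(x)
    return np.mean(x**4) / np.mean(x**2) ** 2
```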

*k* = 16.81, is very similar to the one reported by Huang and Mumford (1999), which was *k* = 17.43.

1/*f*^{2+η} power-law relationship, with *η* called the "anomalous exponent," usually small. An implication of this feature is that natural images are scale-invariant (Burton & Moorhead, 1987; Field, 1987; Ruderman & Bialek, 1994; van der Schaaf & van Hateren, 1996; Tolhurst, Tadmor, & Chao, 1992).

*f*^{2} relationship is recovered. Finally, Figure 6D depicts the result of the center-surround transform, which decreases the energy in the lower frequencies and also in the frequencies over 100 cycles/image: the CSF acts like a band-pass filter, resulting in the effect observed in Figure 6D.

*P*(*x*) = *ax*^{2} + *bx* + *c*, where *a* = 0 would correspond to the scale-invariant case, in which the power spectrum can be approximated by a linear function, and *a* ≠ 0 implies some curvature in the fit. These fits were performed up to 200 cycles/image to avoid the overestimation of the fitting error that would be produced by the drop in the high frequencies due to the PSF.
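These two steps (a radially averaged power spectrum, then a quadratic fit on log-log axes) can be sketched as:

```python
import numpy as np

def radial_power_spectrum(image):
    """Radially averaged power spectrum; returns (frequency in
    cycles/image, mean power at that frequency)."""
    F = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(F) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h // 2, xx - w // 2).round().astype(int)
    freqs = np.arange(1, min(h, w) // 2)
    spectrum = np.array([power[r == k].mean() for k in freqs])
    return freqs, spectrum

def curvature_coefficient(freqs, spectrum, f_max=200):
    """Second-order fit a*x^2 + b*x + c to the log-log spectrum,
    restricted to frequencies up to f_max; returns a."""
    keep = freqs <= f_max
    a, b, c = np.polyfit(np.log10(freqs[keep]),
                         np.log10(spectrum[keep]), 2)
    return a
```

An exact 1/*f*² spectrum yields *a* = 0 (a straight line on log-log axes), while any bending of the spectrum shows up as a nonzero *a*.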

*a* as a function of DR, where negative values of *a* correspond to an inverted U-shape. The blue dots show the fits to the original images, and it can be seen that the coefficient *a* becomes more negative as DR increases. The correlation coefficient plotted in Figure 7 is significant, and the effect is clear in the figure. The red dots show the fits to the data after the images have been passed through the eye's PSF and the Naka-Rushton equation. As the figure shows, these fits do not exhibit the same degree of negative curvature at high DR values: the correlation coefficient is greatly reduced, and the *a* values remain around zero. In Figure 8A we plot the fitting error for first- and second-order polynomial fits, in blue and red respectively, for the camera sensor output, and in Figure 8B we plot the error for the images after modeling the effects of light scatter and the photoreceptor response. The main observation is that the curvature of the power spectra is reduced after the application of the Naka-Rushton equation; indeed, in Figure 7 the absolute values of the *a* coefficients are below 0.1. And although Figure 8A shows that even a second-order fit may have significant error, the application of the Naka-Rushton equation allows the power spectra to be well approximated by a first-order polynomial, as the small errors in Figure 8B suggest.

*a* close to zero or a small error for the first-order polynomial fit in Figure 8A, showing that some HDR images do comply with the 1/*f* rule. An example is shown in the Discussion section.

*horizontal component*, *vertical component*, and *diagonal component* refer to the wavelet coefficients of the first sub-band for each orientation. According to the nomenclature used in Huang and Mumford (1999), these components are cousins to one another, coefficients at adjacent spatial locations in the same sub-band are called brothers, and coefficients that correspond to the same spatial location at different scales are called parent and child.
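A minimal Haar decomposition and the coefficient groupings can be sketched as below; the indexing conventions are ours, and the orientation naming varies across implementations:

```python
import numpy as np

def haar2(x):
    """One level of the orthonormal 2-D Haar transform."""
    lo = (x[:, ::2] + x[:, 1::2]) / np.sqrt(2)   # row low-pass
    hi = (x[:, ::2] - x[:, 1::2]) / np.sqrt(2)   # row high-pass
    ll = (lo[::2] + lo[1::2]) / np.sqrt(2)       # approximation
    lh = (lo[::2] - lo[1::2]) / np.sqrt(2)       # vertical detail
    hl = (hi[::2] + hi[1::2]) / np.sqrt(2)       # horizontal detail
    hh = (hi[::2] - hi[1::2]) / np.sqrt(2)       # diagonal detail
    return ll, (hl, lh, hh)

def relatives(image):
    """Coefficient pairs in Huang & Mumford's nomenclature:
    cousins (same location, different orientation), brothers
    (adjacent location, same sub-band), parent/child (same
    location, successive scales)."""
    ll, (hl, lh, hh) = haar2(np.asarray(image, dtype=float))
    _, (hl2, _, _) = haar2(ll)            # next (coarser) scale
    cousins = (hl, lh)
    brothers = (hl[:, :-1], hl[:, 1:])
    parent_child = (hl2, hl[::2, ::2])    # one child per parent
    return cousins, brothers, parent_child
```

Joint histograms of these pairs are what the analysis compares across DR brackets.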

*f*(*x*) = −*x* axis: neighbors then become less similar.

*increases* the DR, as observed in Figure 4D, which is consistent with the contrast-enhancement properties of the visual system (Martinez, Molano-Mazon, Wang, Sommer, & Hirsch, 2014) and with psychophysical studies of the DR of the visual system (Kunkel & Reinhard, 2010). The CSF transform also widens the derivative histograms (Figure 5D), flattens the power spectra for frequencies below 100 cycles/image (Figure 6D), and reduces the similarity among neighbors (Figure A3). These properties support the decorrelation hypothesis (Atick & Redlich, 1992) and the response-equalization hypothesis (Field, 1987).

*a* and a high DR. An example of this image, with its power spectrum, is presented in Figure 9. In order to be displayed properly the image is tone-mapped, so one cannot see the specularities on the car and the truck in the background. This scene may abide by the 1/*f* law because of the multiple specularities and their distribution in space, as opposed to a single local light source, which would indeed flatten the power spectrum.

log(*I*) − average(log(*I*)). Thus, it was not clear whether "average" referred to the mean, the median, or some other computation. Additionally, the van der Schaaf and van Hateren image database contains two image sets, denoted *.imi* and *.imc*, the former being images that are linearly related to the sensor values and the latter having a correction applied for the optics of the camera. In Figure 11 we test the four possible combinations using the van der Schaaf and van Hateren image database. The results demonstrate that we only obtain the characteristic linear slopes when we use the *.imc* optically corrected image set and subtract the median of each image. As such, we use the median to compute the histograms in Figure 4. The gray dashed box highlights the region illustrated in the original study by Huang and Mumford (1999), which is substantially smaller than the region we plot here, even if one can argue that these parts of the histograms contain a very small number of pixels. When plotted over this greater range, we do not obtain a straight line over the full range of positive values. The images in the SYNS dataset are not optically corrected, and this may be one reason why we do not observe linear parts in the histograms of the small-DR categories. It should also be noted that the number of images per category is much smaller in this study than in the previous one: we have about 70 images per category, while the Huang and Mumford (1999) dataset contains more than 4,000 natural scenes. The type of scene can then affect the histograms; for example, in Figure 4A one can observe a peak in the positive part of the histograms. This peak may correspond to the pixels forming the sky, as the SYNS dataset contains many scenes in which the horizon creates a bimodal intensity distribution.

^{7}), but also scenes with deep shadows and high contrast that span the mid-dynamic ranges.

*f*^{2} behavior for the power spectrum.

Adams, W. J., Elder, J. H., Graf, E. W., Leyland, J., Lugtigheid, A. J., & Muryy, A. (2016). The Southampton-York Natural Scenes (SYNS) dataset: Statistics of surface attitude. *Scientific Reports*, 6: 35805.

Atick, J. J., & Redlich, A. N. (1992). What does the retina know about natural scenes? *Neural Computation*, 4(2), 196–210.

Attneave, F. (1954). Some informational aspects of visual perception. *Psychological Review*, 61(3), 183–193.

Barlow, H. B. (1961). Possible principles underlying the transformations of sensory messages. In W. A. Rosenblith (Ed.), *Sensory communication* (pp. 217–234). Cambridge, MA: MIT Press.

Bertalmío, M. (2014). *Image processing for cinema*. London, UK: CRC Press.

Burton, G. J., & Moorhead, I. R. (1987). Color and spatial structure in natural scenes. *Applied Optics*, 26(1), 157–170.

Debevec, P. E., & Malik, J. (1997). Recovering high dynamic range radiance maps from photographs. In *Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques* (pp. 369–378). New York, NY: ACM Press/Addison-Wesley Publishing Co.

Dror, R. O., Leung, T. K., Adelson, E. H., & Willsky, A. S. (2001). Statistics of real-world illumination. In *Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2001: Vol. 2* (pp. II–II). New York, NY: IEEE.

Enroth-Cugell, C., & Robson, J. G. (1966). The contrast sensitivity of retinal ganglion cells of the cat. *The Journal of Physiology*, 187(3), 517–552.

Fairchild, M. D. (2007). The HDR photographic survey. In *15th Color and Imaging Conference*, 2007 (pp. 233–238). Albuquerque, NM: Society for Imaging Science and Technology.

Ferradans, S., Bertalmío, M., Provenzi, E., & Caselles, V. (2011). An analysis of visual adaptation and contrast perception for tone mapping. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 33(10), 2002–2012.

Field, D. J. (1987). Relations between the statistics of natural images and the response properties of cortical cells. *Journal of the Optical Society of America A*, 4(12), 2379–2394.

*Journal of Vision*, 11(12): 14, 1–17, https://doi.org/10.1167/11.12.14.

*Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2010* (pp. 215–222). New York, NY: IEEE.

Huang, J., & Mumford, D. (1999). Statistics of natural images and models. In *Proceedings of the 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 1999: Vol. 1* (pp. 541–547). New York, NY: IEEE.

Kunkel, T., & Reinhard, E. (2010). A reassessment of the simultaneous dynamic range of the human visual system. In *Proceedings of the 7th Symposium on Applied Perception in Graphics and Visualization* (pp. 17–24). New York, NY: ACM.

Martinez, L. M., Molano-Mazón, M., Wang, X., Sommer, F. T., & Hirsch, J. A. (2014). Statistical wiring of thalamic receptive fields optimizes spatial sampling of the retinal image. *Neuron*, 81(4), 943–956.

McCann, J. J., & Vonikakis, V. (2017). *Frontiers in Psychology*, 8: 2079.

Naka, K. I., & Rushton, W. A. H. (1966). S-potentials from luminosity units in the retina of fish (Cyprinidae). *The Journal of Physiology*, 185(3), 536–555.

In J. M. DiCarlo & B. G. Rodricks (Eds.), *Proceedings of SPIE, the International Society for Optical Engineering, Digital Photography IV: Vol. 6817* (p. 68170N). San Jose, CA: SPIE.

*Proceedings of the 7th Symposium on Applied Perception in Graphics and Visualization* (pp. 9–16). New York, NY: ACM.

Rucci, M., & Victor, J. D. (2015). The unsteady eye: An information-processing stage, not a bug. *Trends in Neurosciences*, 38(4), 195–206.

Ruderman, D. L., & Bialek, W. (1994). Statistics of natural images: Scaling in the woods. *Physical Review Letters*, 73(6), 814–817.

*Annual Review of Neuroscience*, 38, 221–246.

Shannon, C. E. (1948). A mathematical theory of communication. *Bell System Technical Journal*, 27(3), 379–423.

Shapley, R., & Enroth-Cugell, C. (1984). Visual adaptation and retinal gain controls. *Progress in Retinal Research*, 3, 263–346.

Simoncelli, E. P., & Olshausen, B. A. (2001). Natural image statistics and neural representation. *Annual Review of Neuroscience*, 24(1), 1193–1216.

Smirnakis, S. M., Berry, M. J., Warland, D. K., Bialek, W., & Meister, M. (1997). Adaptation of retinal processing to image contrast and spatial scale. *Nature*, 386(6620), 69–73.

*Journal of the Optical Society of America A*, 73(9), 1143–1148.

Tolhurst, D. J., Tadmor, Y., & Chao, T. (1992). Amplitude spectra of natural images. *Ophthalmic and Physiological Optics*, 12(2), 229–232.

van der Schaaf, A., & van Hateren, J. H. (1996). Modelling the power spectra of natural images: Statistics and information. *Vision Research*, 36(17), 2759–2770.

van Hateren, J. H., & van der Schaaf, A. (1998). Independent component filters of natural images compared with simple cells in primary visual cortex. *Proceedings: Biological Sciences*, 265(1394), 359–366.

Vos, J. J., & van den Berg, T. J. T. P. (1999). Report on disability glare. *CIE Collection on Glare*, 135(1), 1–9.

Xiao, F., DiCarlo, J. M., Catrysse, P. B., & Wandell, B. A. (2002). High dynamic range imaging of natural scenes. In *Color and Imaging Conference: Vol. 10* (pp. 337–342). Scottsdale, AZ: Society for Imaging Science and Technology.