Anirvan S. Nandy, John H. Reynolds, Tatyana O. Sharpee; Orientation statistics of natural scenes: spatial-scale and temporal aspects. Journal of Vision 2011;11(11):1162. doi: 10.1167/11.11.1162.
There is evidence that the functional properties of the visual cortex as well as human behavioral performance are attuned to the statistics of the visual world (Karklin & Lewicki, 2009; Geisler et al., 2001). Within the orientation domain, pairs of oriented edges in natural scenes exhibit properties of smooth continuation (co-linearity and co-circularity) when examined at a fine spatial resolution (Sigman et al., 2001). However, in foveated visual systems, spatial resolution falls off precipitously with eccentricity. It is thus of interest to evaluate orientation statistics at the coarser resolution that applies in peripheral vision.
To examine this, we estimated pairwise mutual information between oriented elements on the entire set of about 4200 natural images in the Van Hateren image database. We find the same pattern of smooth continuation at the finer spatial scales as in previous studies. At coarser spatial scales, however, we find significant violations of smooth continuation: pairs instead exhibit a shared orientation that is insensitive to co-linearity, and little or no co-circularity. Examining the frequency of co-occurring oriented elements (adjacent pairs, triplets, and so on), we find that the most frequently occurring patterns at the finer spatial scales are smoothly curved, whereas those at the coarser scales consist of iso-oriented repeated patterns. These results suggest a vocabulary of shapes and patterns that neurons with coarser spatial coding (neurons with more eccentric receptive fields, and neurons in higher-order visual areas) might be tuned to.
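The pairwise mutual information measure referred to above can be sketched as a plug-in estimate from a joint histogram of paired orientations. This is a minimal illustration only: the bin count, binning scheme, and sampling of paired locations are assumptions for the sketch, not the authors' exact procedure.

```python
import numpy as np

def mutual_information(theta_a, theta_b, n_bins=16):
    """Plug-in estimate of mutual information (bits) between two orientation
    samples measured at paired locations across many image patches.

    theta_a, theta_b: arrays of orientations in radians, in [0, pi).
    The 16-bin uniform binning is an illustrative assumption.
    """
    edges = np.linspace(0.0, np.pi, n_bins + 1)
    joint, _, _ = np.histogram2d(theta_a, theta_b, bins=[edges, edges])
    p_xy = joint / joint.sum()                      # joint distribution
    p_x = p_xy.sum(axis=1, keepdims=True)           # marginal over columns
    p_y = p_xy.sum(axis=0, keepdims=True)           # marginal over rows
    nz = p_xy > 0                                   # avoid log(0) terms
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x * p_y)[nz])))
```

Independent orientation samples should give a value near zero, while identical samples should approach the entropy of the binned distribution (here up to log2(16) = 4 bits).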
The second question we examined was how much of this information content is preserved across time in natural scene movies. Using a fine spatial scale, from a set of about 600 movies, we find that contextual information disappears after about 60 milliseconds. This finding suggests temporal limits on neuronal mechanisms tuned to regularities in natural scenes.
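The temporal analysis amounts to computing the same mutual information measure between orientations separated by increasing time lags and finding where it decays. The sketch below applies this to a synthetic orientation random walk standing in for orientations tracked across movie frames; the process, lags, and bin count are illustrative assumptions, not the movie dataset or the authors' pipeline.

```python
import numpy as np

def mi_bits(x, y, n_bins=16):
    """Plug-in mutual information (bits) between two binned samples."""
    joint, _, _ = np.histogram2d(x, y, bins=n_bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

# Synthetic stand-in for an orientation time series: a noisy random walk
# wrapped to [0, pi). Purely illustrative.
rng = np.random.default_rng(1)
theta = np.cumsum(rng.normal(0.0, 0.3, 20000)) % np.pi

# MI between orientation now and orientation `lag` frames later should
# decay toward zero as the lag grows.
lags = [1, 5, 20, 100]
mi_by_lag = [mi_bits(theta[:-k], theta[k:]) for k in lags]
```

On real movies one would plot this curve against lag in milliseconds and read off where it reaches baseline, analogous to the roughly 60 ms figure reported above.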