**Although orientation coding in the human visual system has been researched with simple stimuli, little is known about how orientation information is represented while viewing complex images. We show that, similar to findings with simple Gabor textures, the visual system involuntarily discounts orientation noise in a wide range of natural images, and that this discounting produces a dipper function in the sensitivity to orientation noise, with best sensitivity at intermediate levels of pedestal noise. However, the level of this discounting depends on the complexity and familiarity of the input image, resulting in an image-class-specific threshold that changes the shape and position of the dipper function according to image class. These findings do not fit a filter-based feed-forward view of orientation coding, but can be explained by a process that utilizes an experience-based perceptual prior of the expected local orientations and their noise. Thus, the visual system encodes orientation in a dynamic context by continuously combining sensory information with expectations derived from earlier experiences.**

**Figure 1**


The spatial frequency *ω*_s was 4, 8, or 16 c/° (wavelength *λ*_s = 32, 16, or 8 pixels, respectively), the orientation *θ* was spaced at 45° intervals from 0° to 135°, *s*_θ = *x* cos *θ* + *y* sin *θ*, and the standard deviations of the Gaussian envelope were *σ*_x = *σ*_y = 0.25 *λ*_s. The images were analyzed separately at each spatial frequency, and the response magnitude *R* and phase *ϕ* of each filter were computed at each point in the image, together with the interpolated orientation. *L*_max and *L*_min refer to maximum and minimum luminance, respectively. Finally, summing these Gabor elements composed our synthetic natural image.
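The magnitude and phase equations referenced above are not reproduced here; a minimal numpy sketch of the standard quadrature-pair computation they describe (the function names and the 3-sigma filter support are our own choices, not the paper's):

```python
import numpy as np

def gabor_pair(wavelength, theta, sigma_ratio=0.25):
    """Even (cosine-phase) and odd (sine-phase) Gabor filters with
    sigma_x = sigma_y = 0.25 * wavelength, matching the analysis above."""
    sigma = sigma_ratio * wavelength
    half = int(np.ceil(3 * sigma))                     # 3-sigma support (our choice)
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    s = x * np.cos(theta) + y * np.sin(theta)          # s = x cos(theta) + y sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = 2 * np.pi * s / wavelength
    return env * np.cos(carrier), env * np.sin(carrier)

def magnitude_phase(patch, wavelength, theta):
    """Quadrature response at the centre of a patch the same size as the filters:
    magnitude R = sqrt(e^2 + o^2) and phase phi = atan2(o, e)."""
    even, odd = gabor_pair(wavelength, theta)
    e, o = float(np.sum(patch * even)), float(np.sum(patch * odd))
    return np.hypot(e, o), np.arctan2(o, e)
```

Applied to a grating of the filter's preferred wavelength, the magnitude is largest at the matching orientation and the phase recovers the grating's phase at the patch centre.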

A spacing of *σ* was maintained between adjacent elements to avoid overlaps and to match the spacing across the four stimulus classes (Figure 1B). For parallel patterns, the orientation of each element was drawn from a normal distribution with the same mean orientation within one trial, with that mean selected randomly across trials. For circular patterns, the orientation of each element was drawn from a normal distribution with a mean orientation determined by its position relative to the center of the image.

The fractal patterns had a 1/*f* amplitude spectrum that is close to that found in natural images (Field, 1987).

cd/m² and a refresh rate of 60 Hz. The display measured 36° horizontally (1152 pixels) and 27° vertically (870 pixels) and was viewed from 57 cm in a semilit room. The RGB monitor settings were adjusted so that the luminance of green was twice that of red, which in turn was twice that of blue; this shifted the white point of the monitor to (*x*, *y*) = (0.31, 0.28) at 50 cd/m². A bit-stealing algorithm (Tyler, 1997) was used to obtain 10.8 bits (1,785 levels) of luminance resolution on the RGB monitor.
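The 1,785-level figure is consistent with reading the 4:2:1 channel-luminance calibration as integer weights on three 8-bit guns; a quick sanity check (this integer-weight reading is our assumption, not the paper's derivation):

```python
import math

# Channel luminance weights after calibration: green = 2 x red = 4 x blue.
W_G, W_R, W_B = 4, 2, 1

# With 8-bit guns (0-255), the combined integer luminance code is 4g + 2r + b.
max_code = 255 * (W_G + W_R + W_B)      # 1785: highest code above black
# Blue alone steps the code in units of 1 across any gap of width 255, so every
# integer code 0..1785 is reachable: 1,785 levels above black.
bits = math.log2(max_code + 1)          # ~10.8 bits of luminance resolution
print(max_code, round(bits, 1))         # -> 1785 10.8
```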

*σ*_p, where *σ*_p was one of six possible values between 1° and 32° in log steps. The staircase image was identical to the pedestal image, except that the SD of the normal distribution from which the orientation of each element was drawn was increased by Δ*σ* under the control of a staircase (Barlow, 1956). The pedestal noise manipulations altered the orientation of 99% of the elements by less than ±90°, so that wrapping of orientations did not become an issue, even though slightly more wrapping could occur due to the addition of staircase noise (average wrapping at the highest pedestal level was 5.75%).
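The orientation-sampling scheme above can be sketched as follows (linear addition of the staircase increment Δσ to the SD is our assumption; the function name is our own):

```python
import numpy as np

rng = np.random.default_rng(0)

def element_orientations(n, mean_deg, sigma_p, delta_sigma=0.0):
    """Draw n element orientations for one image: a normal distribution around
    the class-defined mean with pedestal SD sigma_p; the staircase adds
    delta_sigma to the SD. Returns orientations wrapped to [0, 180) and the
    fraction of elements altered by more than +/-90 deg (i.e., wrapped)."""
    theta = rng.normal(mean_deg, sigma_p + delta_sigma, size=n)
    wrapped = np.abs(theta - mean_deg) > 90.0
    return np.mod(theta, 180.0), float(wrapped.mean())
```

At the highest pedestal level (σ_p = 32°) only about 1% of draws fall beyond ±90°, matching the claim that wrapping stays negligible, and adding staircase noise raises that fraction slightly.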

where Δ*σ* is the JND in orientation SD, and *w* is the Weber fraction for orientation variance at pedestal levels exceeding the threshold. The parameters of the model were determined with least-squares minimization, weighted with SDs obtained by bootstrapping, from the estimated JND at each pedestal level and stimulus type.
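The hard-threshold prediction can be sketched with one plausible parameterization consistent with this description — Θ as the discounting threshold and *w* as a Weber fraction on variance (the paper's exact equation may differ):

```python
import numpy as np

def jnd_hard_threshold(sigma_p, theta, w):
    """Sketch of a hard-threshold prediction: pedestal variance below Theta^2 is
    discounted up to Theta^2, and a difference is detected once total orientation
    variance exceeds the effective pedestal variance by the Weber fraction w.
    Returns the predicted JND in orientation SD."""
    v_eff = np.maximum(sigma_p ** 2, theta ** 2)   # discounted pedestal variance
    return np.sqrt(v_eff * (1.0 + w)) - sigma_p    # extra SD needed for detection
```

This rule reproduces the dipper shape: below Θ the predicted JND falls as pedestal noise rises, reaches a minimum near σ_p = Θ, and then grows in Weber-like proportion to σ_p.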

The model maps the added orientation noise Δ*σ* to expected response accuracy *P*(*c*), given a pedestal level of orientation noise *σ*_p, an intrinsic noise level *σ*_int as a free parameter, and two degrees of freedom *d*_1 and *d*_2 for the F-distribution. The degrees of freedom reflect how many individual elements an observer used for estimating the variance, and both are equal to the second free parameter *M* (*d*_1 = *d*_2 = *M*) of the model. The parameters of the model were estimated with a maximum-likelihood fit to data vectors consisting of the pedestal noise level, added staircase noise, stimulus class, and the observer's response on each trial.
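The mapping from Δσ to *P*(*c*) can be sketched by simulation: if an observer estimates each image's orientation variance from *M* elements, the two sample variances are scaled χ²(*M*) variables and their ratio follows an F(*M*, *M*) distribution. A Monte-Carlo sketch (the variance-addition of intrinsic noise and the exact degrees-of-freedom convention below are our assumptions):

```python
import numpy as np

def p_correct(delta_sigma, sigma_p, sigma_int, M, n_sim=200_000, seed=1):
    """Inefficient-observer sketch: each image's orientation variance is
    estimated from M elements, so each sample variance is a scaled chi-square
    variable; the observer picks the image with the larger estimate."""
    rng = np.random.default_rng(seed)
    v_ped = sigma_p ** 2 + sigma_int ** 2                      # pedestal + intrinsic
    v_stc = (sigma_p + delta_sigma) ** 2 + sigma_int ** 2      # staircase image
    s_ped = v_ped * rng.chisquare(M, n_sim) / M                # sample variances
    s_stc = v_stc * rng.chisquare(M, n_sim) / M
    return float(np.mean(s_stc > s_ped))                       # choose noisier image
```

With no added noise the model is at chance; accuracy grows with Δσ and with the number of elements *M* used for the estimate, which is what lets *M* be fitted from the psychometric data.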

*R*² was used to estimate and compare the overall goodness of fit of the two models. Identifiability of the models was assessed from the mutual parameter covariance matrices of each model (Walter & Pronzato, 1996).

*D*, where *D* is defined as the set of orientation differences Δ*α* between the observed and expected orientations of each segment in the image, and where the expected orientations are defined by the a priori template. The inferred posterior could be used to compare the pedestal and staircase images when deciding which of the two has the higher noise level. Segments of the image were assumed to be independently and identically distributed; therefore, the likelihood of the data is the product over the individual orientation deviations from the template. We assume that the probability of the orientation difference Δ*α*_i between a segment and the template can be described by a zero-mean normal distribution. In this formulation, we omit the observer's measurement noise for the sake of simplicity. While measurement noise influences the final discounting of orientation variance, it is nonspecific with regard to stimulus class; as a consequence, any measurement noise in this model is implicitly embedded in the a priori template.
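The omitted likelihood equations presumably take the standard product-of-independent-Gaussians form implied by the surrounding text (a reconstruction from the stated definitions, not the paper's typeset equations):

```latex
p(D \mid \sigma^2) = \prod_{i=1}^{M} p(\Delta\alpha_i \mid \sigma^2),
\qquad
p(\Delta\alpha_i \mid \sigma^2)
  = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left( -\frac{\Delta\alpha_i^{2}}{2\sigma^2} \right).
```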

The prior over the variance is specified by hyperparameters *α*′ and *β*′, where *β*′ scales the variance. We further note that the mean of the prior is equal to the model's prior assumed orientation variance, and that *M* is the sample size—in this case, the number of orientation segments present in the image. From Equation 13, the maximum a posteriori estimate is derived as the mode of the posterior distribution. As a final step, we implemented a model on which observers can base their decision in each trial. This decision model compares the estimated overall noise of the pedestal and staircase images and registers a difference when the ratio of their maximum a posteriori estimates exceeds a criterion termed *W*_Bayes, where *W*_Bayes indicates how large the difference between the sum of the pedestal and staircase noise and the prior model noise needs to be for an observer to correctly identify the staircase image (see Figure 2A, B). When the pedestal noise is higher than the prior model noise, *W*_Bayes is equivalent to a Weber fraction—i.e., how much extra staircase orientation noise is needed at each pedestal noise level for a difference to be perceived (see Figure 2C).
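Under a conjugate inverse-gamma prior on the variance, the posterior and its mode have closed forms; a sketch of the MAP estimate and the *W*_Bayes decision rule (the inverse-gamma parameterization and the function names are our assumptions, chosen to match the description above):

```python
import numpy as np

def map_variance(deviations, alpha_prime, beta_prime):
    """MAP estimate of orientation variance under a conjugate inverse-gamma
    prior IG(alpha', beta'); the prior mean is beta' / (alpha' - 1).
    The posterior is IG(alpha' + M/2, beta' + sum(dev^2)/2); MAP = its mode."""
    m = deviations.size
    a_post = alpha_prime + m / 2.0
    b_post = beta_prime + 0.5 * float(np.sum(deviations ** 2))
    return b_post / (a_post + 1.0)                 # mode of an inverse-gamma

def choose_staircase(dev_pedestal, dev_staircase, alpha_prime, beta_prime, w_bayes):
    """Decision-rule sketch: respond 'staircase' when the ratio of the two MAP
    noise estimates exceeds the criterion W_Bayes."""
    ratio = (map_variance(dev_staircase, alpha_prime, beta_prime)
             / map_variance(dev_pedestal, alpha_prime, beta_prime))
    return ratio > w_bayes
```

With a weak prior the MAP tracks the sample variance; with a strong prior it is pulled toward the prior's assumed orientation variance, which is the class-specific discounting mechanism the model relies on.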

**Figure 2**


*β*′ was set to a fixed value, as its alteration had no effect on the performance of the model. The parameters *W*_Bayes and *σ*_model were fitted with Equation 15 to the empirically observed thresholds Δ*σ* by a maximum-likelihood routine, separately for each stimulus class. Next, we used this model in the actual experimental setup, computing the model's choices over a large number of simulated pedestal and staircase images. Finally, the theoretical threshold results were determined by identifying, for each pedestal noise level and stimulus class, the Δ*σ* that just exceeded *W*_Bayes and therefore became noticeable.

At the lowest pedestal noise level (*σ*_p = 1°), the JND in orientation SD was relatively high; it gradually fell with added pedestal noise, reached a minimum at around *σ*_p = 16°, and then rose steeply with the further addition of pedestal noise (*σ*_p = 16°: *M* = 4.31, *SE* = 0.46; *σ*_p = 32°: *M* = 9.54, *SE* = 1.01; Δ*M* = 5.24, *SE* = 0.65, *p* < 0.01; see Figure 3A brackets). This was true for all observers (Figure 3A) and stimulus classes (Figure 3B).

**Figure 3**


A two-way repeated-measures ANOVA with pedestal noise level and stimulus class as within-observer factors revealed a main effect of pedestal noise, *F*(5, 35) = 27.99, *p* < 0.001, a main effect of stimulus class, *F*(3, 21) = 34.2, *p* < 0.001, and an interaction between the two, *F*(15, 105) = 2.55, *p* < 0.01 (Δ*M* = −4.43, *SE* = 0.64, *p* = 0.001; circular vs. fractal: Δ*M* = −5.37, *SE* = 0.64, *p* < 0.001; parallel vs. fractal: Δ*M* = −4.20, *SE* = 0.41, *p* < 0.001).

The analysis was then restricted to low pedestal noise levels (*σ*_p = 1°–4°) and tested with a two-way repeated-measures ANOVA, again using stimulus class and pedestal noise level as within-observer factors. We found a main effect of pedestal noise, *F*(2, 14) = 28.94, *p* = 0.003, a main effect of stimulus class, *F*(3, 21) = 9.04, *p* < 0.001, but no interaction, *F*(6, 42) = 0.82, *p* = 0.56.

**Table 1**

**Figure 4**


*t* test), *t*(31) = 2.84, *p* < 0.01, *d* = 0.50 (Figure 5A).

**Figure 5**


*σ*_int and *M* for the inefficient-observer model; Θ and *w* for the hard-threshold model). Low parameter covariance indicates better identifiability of the model parameterization (Walter & Pronzato, 1996). Overall, the hard-threshold model displayed lower parameter covariance than the inefficient-observer model (paired-samples *t* test), *t*(31) = 42.51, *p* ≪ 0.001, *d* = 8.15 (Figure 5B).

The hard-threshold model also yielded a higher *R*² than the inefficient-observer model for all stimulus classes across observers. The difference was significant for the fractal, *t*(7) = 2.86, *p* = 0.02, *d* = 1.01, and object, *t*(7) = 2.42, *p* < 0.05, *d* = 0.86, stimulus classes (paired-samples *t* tests), and trending toward significance for the parallel stimulus class, *t*(7) = 2.30, *p* = 0.055, *d* = 0.81. A less pronounced modulation in the underlying data of the parallel and circular stimulus classes, due to a floor effect, makes it hard to confirm true differences between the fitted models for these classes.

(*σ*_θ = 8.26) and objects (*σ*_θ = 7.24) than for parallel (*σ*_θ = 5.84) and circular (*σ*_θ = 5.28) patterns. These results corroborate a systematic, class-specific reduction in dip size from fractals to the other stimulus classes.

*σ*_model showed an orderly progression from circular to fractal stimuli (see the second-to-last row of Table 2). This gradual increase confirms that the constraining effect of the sensory input, relative to the internal template, diminishes as one moves to more abstract stimulus classes.

**Figure 6**


**Table 2**

Both dip-position parameters—Θ for the hard-threshold model and *σ*_model for the Bayesian account—shifted systematically toward higher values across classes, as confirmed with paired-samples *t* tests: For parallel and circular stimuli, the two curves were nearly identical, showing no difference in dip position—Θ: *t*(7) = 1.84, *p* = 0.11, *d* = 0.65; *σ*_model: *t*(7) = 0.72, *p* = 0.49, *d* = 0.26. The dip position was significantly higher for object images than for circular images—Θ: *t*(7) = 3.79, *p* < 0.01, *d* = 1.34; *σ*_model: *t*(7) = 4.02, *p* < 0.01, *d* = 1.42—and trending higher than for parallel images—Θ: *t*(7) = 2.03, *p* = 0.08, *d* = 0.72; *σ*_model: *t*(7) = 1.90, *p* = 0.09, *d* = 0.68. For fractals, in turn, it was significantly higher still than for object stimuli—Θ: *t*(7) = 3.11, *p* = 0.02, *d* = 1.10; *σ*_model: *t*(7) = 3.11, *p* = 0.02, *d* = 1.10. This pattern confirms that the bottom-up sensory influence of the Gabor elements decreases as the effect of the internal template increases (see Table 2, Bayesian model).

The *M* parameter, reflecting sample size in the original formulation of the inefficient-observer model, loses its direct interpretation.

*Science*, 262(5142), 2042–2044. [PubMed]

*Journal of the Optical Society of America*, 46(8), 634–639.

*Science*, 331(6013), 83–87, doi:10.1126/science.1195870. [PubMed]

*Psychological Review*, 94(2), 115–147. [PubMed]

*Spatial Vision*, 10, 433–436. [PubMed]

*Proceedings of the Royal Society B: Biological Sciences*, 264(1380), 431–436, doi:10.1098/rspb.1997.0061. [PubMed]

*The Journal of Physiology*, 187(2), 437–445. [PubMed]

*The Journal of General Physiology*, 34(1), 87–136. [PubMed]

*Vision Research*, 49(18), 2285–2296, doi:10.1016/j.visres.2009.06.016. [PubMed]

*Journal of the Optical Society of America A*, 2(7), 1160–1169. [PubMed]

*Spatial vision*. New York: Oxford Press.

*Journal of the Optical Society of America A*, 4(12), 2379–2394. [PubMed]

*Proceedings of the Royal Society B: Biological Sciences*, 228(1253), 379–400. [PubMed]

*Vision Research*, 43, 2637–2648, doi:10.1016/S0042-6989(03)00441-3. [PubMed]

*IEEE Transactions on Pattern Analysis and Machine Intelligence*, 13(9), 891–906.

*Biological Cybernetics*, 36(4), 193–202. [PubMed]

*Vision Research*, 41(6), 711–724. [PubMed]

*Neuron*, 54(5), 677–696, doi:10.1016/j.neuron.2007.05.019. [PubMed]

*Visual pattern analyzers*. Oxford, UK: Oxford University Press.

*The Journal of Physiology*, 195(1), 215–243. [PubMed]

*Philosophical Transactions of the Royal Society B: Biological Sciences*, 352(1358), 1275–1282, doi:10.1098/rstb.1997.0110. [PubMed]

*Vision Research*, 41(5), 585–598. [PubMed]

*Journal of the Optical Society of America*, 70(12), 1458–1471. [PubMed]

*Proceedings of the National Academy of Sciences, USA*, 92(18), 8135–8139. [PubMed]

*Vision*. San Francisco: W.H. Freeman.

*Journal of the Optical Society of America A*, 18(9), 2209–2219. [PubMed]

*Vision Research*, 14(10), 1039–1042. [PubMed]

*The Journal of Neuroscience*, 34(6), 2374–2388, doi:10.1523/JNEUROSCI.1755-13.2014. [PubMed]

*Network*, 7(2), 333–339, doi:10.1088/0954-898X/7/2/014. [PubMed]

*Vision Research*, 24(2), 121–128. [PubMed]

*Nature Neuroscience*, 4(7), 739–744. [PubMed]

*Spatial Vision*, 10(4), 437–442. [PubMed]

*Journal of the Optical Society of America A*, 2(2), 147–155. [PubMed]

*Nature Neuroscience*, 2(11), 1019–1025. [PubMed]

*Nature*, 387(6630), 281–284. [PubMed]

*i-Perception*, 1(3), 121–142, doi:10.1068/i0384. [PubMed]

*Nature Reviews Neuroscience*, 8(7), 522–535, doi:10.1038/nrn2155. [PubMed]

*Proceedings of the National Academy of Sciences, USA*, 104(15), 6424–6429. [PubMed]

*Attention, Perception & Psychophysics*, 71(3), 435–443, doi:10.3758/APP.71.3.435. [PubMed]

*The Journal of Neuroscience*, 15(8), 5448–5465. [PubMed]

*Vision Research*, 14(12), 1409–1420. [PubMed]

*Spatial Vision*, 10(4), 369–377. [PubMed]

*Mathematics and Computers in Simulation*, 42(2–3), 125–134.

*Vision Research*, 23(12), 1465–1477. [PubMed]