Humans are confronted by a visual world whose structure is biased in a number of ways. It has been argued previously that the neural encoding of complex scenic properties may be made efficient, or “sparse,” by taking such biases into account (Atick & Redlich, 1990; Barlow, 1989; Bex, Solomon, & Dakin, 2009; Brenner, Bialek, & de Ruyter van Steveninck, 2000; Field & Brady, 1997; Hansen & Essock, 2004; Wainwright, 1999). It has been further argued that visual experience with these biases, such as the distribution of spatial scales and colors of content in recently viewed visual scenes, adapts our visual system to discount them and keep perception veridical in the face of changing environments (Cecchi, Rao, Xiao, & Kaplan, 2010; Shepard, 1992; Webster & MacLeod, 2011; Webster & Miyahara, 1997). Take, for example, the natural yellowing of an individual's lens across decades of age: The individual's perceptual encoding mechanisms also change to maintain an unchanging perception of “neutral” (Webster, Werner, & Field, 2005).

Another fundamental property of the visual world, the orientation of structural content in the scene, also has a biased distribution: Typical scenes, both natural and “carpentered,” contain more (and stronger) content at some orientations than at others (Baddeley & Hancock, 1991; Coppola, Purves, McCoy, & Purves, 1998; Girshick, Landy, & Simoncelli, 2011; Hancock, Baddeley, & Smith, 1992; Keil & Cristobal, 2000; Switkes, Mayer, & Sloan, 1978; see review in Hansen & Essock, 2004). Specifically, owing to the horizon, foreshortening, and phototropic/gravitropic growth (even aside from additional, “carpentered world” properties), the average scene contains most content around horizontal, second most around vertical, and least near the oblique orientations (45° and 135°; Hansen & Essock, 2004). The processing of orientation in the human visual system is biased in the opposite way: Suppression by content in a broadband image is strongest for horizontal content, weakest for oblique content, and intermediate for vertical content (Essock, DeFord, Hansen, & Sinai, 2003; Essock, Haun, & Kim, 2009; Hansen & Essock, 2004; Hansen & Essock, 2006). This horizontal effect pattern of anisotropy¹ is seen in suppression (both surround and overlay suppression, as well as general, large-field suppression) and roughly matches the magnitude of the anisotropic bias (H > V > oblique) observed in average scene content (Essock et al., 2009). Such anisotropic suppression would be an efficient way for humans to encode orientation, whitening the neural representation and serving to perceptually emphasize scene content (objects) that deviates structurally from the normal background of scenes (e.g., Essock et al., 2003; Essock et al., 2009; Hansen & Essock, 2004; Hansen & Essock, 2005; Hansen et al., 2015).

However, it is not yet known whether this horizontal-effect pattern of anisotropic suppression serves to undo the orientation-biased content on a long-term (e.g., evolutionary) timescale or on a recent-past timescale, perhaps even adjusting the perception of oriented structure “on the fly” based on the current visual world. Here, we addressed whether the human visual system alters the perceptual salience of oriented structure to compensate for the distribution of oriented content in the just-experienced visual world. We evaluated this idea in the context of a previously developed Bayesian model of malleable orientation salience, in which perceptual bias and variability are related to the prior expectation and likelihood of a given orientation of content in the environment (Girshick, Landy, & Simoncelli, 2011; Stocker & Simoncelli, 2006). Under this approach, the parameterization of the likelihood is inferred from observers' behavior, and the prior probability is modeled from the empirically observed distribution of orientation content in the environment.
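The ingredients of such a Bayesian observer model can be sketched numerically. The sketch below is illustrative only, not the fitted model from Girshick et al. (2011) or Stocker and Simoncelli (2006): the prior is an assumed mixture peaked at horizontal and vertical (mimicking the H > V > oblique scene bias), the likelihood is an assumed von Mises bump around a noisy measurement, and all precision and weight parameters are arbitrary choices for demonstration.

```python
# Minimal sketch of a Bayesian orientation estimator with a
# cardinal-biased prior. All parameter values are illustrative
# assumptions, not fitted values from the cited studies.
import numpy as np

# Orientation domain in degrees (period 180, not 360).
theta = np.linspace(0.0, 180.0, 1800, endpoint=False)

def von_mises_180(x, mu, kappa):
    """Circular bump with period 180 deg (doubled-angle trick)."""
    d = np.exp(kappa * np.cos(np.deg2rad(2.0 * (x - mu))))
    return d / d.sum()

# Assumed prior mimicking natural-scene statistics:
# most mass near horizontal (0 deg), second most near vertical
# (90 deg), least near the obliques (45 and 135 deg).
prior = (0.55 * von_mises_180(theta, 0.0, 2.0)
         + 0.35 * von_mises_180(theta, 90.0, 2.0)
         + 0.10 * np.full_like(theta, 1.0 / theta.size))

def bayes_estimate(measured_deg, kappa_like=4.0):
    """Posterior circular-mean orientation for a noisy measurement."""
    likelihood = von_mises_180(theta, measured_deg, kappa_like)
    post = likelihood * prior
    post /= post.sum()
    # Circular mean computed on the doubled-angle circle.
    z = np.sum(post * np.exp(1j * np.deg2rad(2.0 * theta)))
    return (np.rad2deg(np.angle(z)) / 2.0) % 180.0

# A measurement near (but off) horizontal is pulled toward the
# prior peak at 0 deg, producing the repulsion-from-uniformity
# bias characteristic of this class of model.
print(bayes_estimate(20.0))  # estimate lies between 0 and 20 deg
```

With a flat prior the estimate would simply track the measurement; the cardinal-biased prior instead shifts estimates toward horizontal and vertical, which is the qualitative behavior the model framework predicts for an observer adapted to biased scene statistics.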