Abstract
In daily life, we experience complex visual environments in which numerous visual properties are tightly woven into holistic dimensions. Our visual system warps and compresses this visual input across its multiple stages of processing to arrive at perceptual representations that link to conceptual knowledge. Compelling demonstrations in object perception suggest that high-level cognitive functions like categorization can impact how visual processing unfolds, for example, by distinctly biasing or distorting perception along category-relevant stimulus dimensions. However, whether such categorical perception mechanisms similarly impact the perception of real-world scenes remains an important open question. Here, we address this question with a novel learning task in which participants learned to categorize realistic scene images synthesized from an image space defined by continuously varying holistic visual properties. First, through feedback-based learning, participants acquired an arbitrary linear category boundary that divided the scene space. Next, participants completed a visual working memory estimation task in which a target scene was briefly presented and then, after a short delay, reconstructed from the continuous scene space. Memory reconstruction errors revealed systematic biases that tracked the subjective nature of each participant’s category learning. Specifically, errors were selectively biased along the diagnostic dimensions defined by participants’ acquired category boundaries. In other words, after only a short category learning session, scenes were remembered as more similar to their respective learned categories at the expense of their veridical details. These results suggest that our visual system extracts diagnostic dimensions optimized for top-down task goals and actively leverages them for subsequent perception and memory. The complexity and realism of our stimulus space highlight the dynamic interplay between visual perception and high-level cognition in an ecologically valid setting.