Journal of Vision, December 2022, Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract
Category learning biases in real-world scene perception
Author Affiliations & Notes
  • Gaeun Son
    University of Toronto
  • Dirk B. Walther
    University of Toronto
  • Michael L. Mack
    University of Toronto
  • Footnotes
    Acknowledgements  Natural Sciences and Engineering Research Council (NSERC) Discovery Grants (RGPIN-2017-06753 to MLM and RGPIN-2020-04097 to DBW) and Canada Foundation for Innovation and Ontario Research Fund (36601 to MLM).
Journal of Vision December 2022, Vol.22, 4123. doi:https://doi.org/10.1167/jov.22.14.4123
      Gaeun Son, Dirk B. Walther, Michael L. Mack; Category learning biases in real-world scene perception. Journal of Vision 2022;22(14):4123. https://doi.org/10.1167/jov.22.14.4123.



      © ARVO (1962-2015); The Authors (2016-present)

Abstract

In daily life, we experience complex visual environments in which numerous visual properties are tightly woven into holistic dimensions. Our visual system warps and compresses this visual input across its multiple stages of operation to arrive at perceptual insights that link to conceptual knowledge. Compelling demonstrations in object perception suggest that high-level cognitive functions like categorization can impact how visual processing unfolds, for example by distinctly biasing or distorting perception along category-relevant stimulus dimensions. However, whether such categorical perception mechanisms similarly impact the perception of real-world scenes remains an important open question. Here, we address this question with a novel learning task in which participants learned to categorize realistic scene images synthesized from an image space defined by continuously varying holistic visual properties. First, participants learned, through feedback-based learning, an arbitrary linear category boundary that divided the scene space. Next, participants completed a visual working memory estimation task in which a target scene was briefly presented and then, after a brief delay, reconstructed from the continuous scene space. Memory reconstruction errors revealed systematic biases that tracked the subjective nature of each participant's category learning. Specifically, errors were selectively biased along the diagnostic dimensions defined by participants' acquired category boundaries. In other words, after only a short category learning session, scenes were remembered as more similar to their respective learned categories at the expense of their veridical details. These results suggest that our visual system extracts diagnostic dimensions that optimize top-down task goals and actively leverages them for subsequent perception and memory.
The highly complex and realistic nature of our stimulus space highlights the dynamic interplay of visual perception and high-level cognition in an ecologically valid setting.
