September 2024 | Volume 24, Issue 10 | Open Access
Vision Sciences Society Annual Meeting Abstract
Characterizing idiosyncrasies in perception and neural representation of real-world scenes
Author Affiliations & Notes
  • Gongting Wang
    Department of Education and Psychology, Freie Universität Berlin
    Department of Mathematics and Computer Science, Physics, Geography, Justus Liebig University Gießen
  • Matthew Foxwell
    Department of Psychology, University of York
  • Lixiang Chen
    Department of Education and Psychology, Freie Universität Berlin
  • David Pitcher
    Department of Psychology, University of York
  • Radoslaw Martin Cichy
    Department of Education and Psychology, Freie Universität Berlin
  • Daniel Kaiser
    Department of Mathematics and Computer Science, Physics, Geography, Justus Liebig University Gießen
    Center for Mind, Brain and Behavior, Justus Liebig University Gießen and Philipps University Marburg
  • Footnotes
Acknowledgements: European Research Council (ERC) starting grant (ERC-2022-STG 101076057); “The Adaptive Mind”, funded by the Excellence Program of the Hessian Ministry of Higher Education, Science, Research and Art; China Scholarship Council (CSC)
Journal of Vision September 2024, Vol. 24, 538. https://doi.org/10.1167/jov.24.10.538
Abstract

The efficiency of visual perception is not solely determined by the structure of the visual input; it also depends on our expectations, derived from internal models of the world. Given individual differences in visual experience and brain architecture, such internal models likely differ systematically across the population. Yet we have no clear understanding of how such differences shape the individual nature of perception. Here, we present a novel approach that uses drawing to directly access the contents of internal models in individual participants. Participants were first asked to draw typical versions of different scene categories (e.g., a kitchen or a living room), taken as descriptors of their internal models. These drawings were converted into standardized 3D renders to control for differences in drawing ability and style. In the subsequent experiments, participants viewed renders that were either based on their own drawings (and thus similar to their internal models), based on other people’s drawings, or based on arbitrary scenes they were asked to copy (thereby controlling for memory effects). In a series of behavioral experiments, we show that participants categorize briefly presented scene renders more accurately when the renders are more similar to their personal internal models. This suggests that the efficiency of scene categorization is determined by how well the inputs resemble individual participants’ internal scene models. Using multivariate decoding on EEG data, we further demonstrate that similarity to internal models enhances the cortical representation of scenes, starting from perceptual processing at around 200 ms. A deep neural network modeling analysis on the EEG data suggests that scenes more similar to participants’ internal models are processed in more idiosyncratic ways, rendering their representations less faithful to visual features. Together, our results demonstrate that differences in internal models determine the personal nature of perception and neural representation.
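
The abstract summarizes, but does not show, the time-resolved multivariate decoding applied to the EEG data. As a minimal sketch of what such an analysis can look like, assuming epoched EEG data and a scikit-learn classifier (all names, data shapes, and labels below are illustrative assumptions, not the authors’ pipeline), one can train a classifier at each timepoint to distinguish renders based on a participant’s own drawings from renders based on other people’s drawings:

```python
# Minimal sketch of time-resolved multivariate EEG decoding.
# All data shapes, labels, and the scikit-learn pipeline are
# illustrative assumptions, not the authors' actual analysis.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulated epoched EEG: (trials, channels, timepoints).
# Labels: 1 = render based on the participant's own drawing,
#         0 = render based on another participant's drawing.
n_trials, n_channels, n_times = 200, 64, 100
X = rng.normal(size=(n_trials, n_channels, n_times))
y = rng.integers(0, 2, size=n_trials)

# Decode the condition from channel patterns separately at each
# timepoint; the timepoint where accuracy rises above chance marks
# when the distinction emerges in the cortical response.
clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])

print(f"peak decoding accuracy {accuracy.max():.2f} "
      f"at timepoint {accuracy.argmax()}")
```

In an analysis of this kind, above-chance decoding emerging around 200 ms after stimulus onset would correspond to the perceptual-stage effect reported in the abstract.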
