September 2024
Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2024
Deciphering visual representations behind subjective perception using reconstruction methods
Author Affiliations & Notes
  • Fan L. Cheng
    ATR Computational Neuroscience Laboratories
    Kyoto University
  • Tomoyasu Horikawa
    ATR Computational Neuroscience Laboratories
    NTT Communication Science Laboratories
  • Kei Majima
    Kyoto University
  • Misato Tanaka
    ATR Computational Neuroscience Laboratories
    Kyoto University
  • Mohamed Abdelhack
    Kyoto University
    Krembil Centre for Neuroinformatics
  • Shuntaro C. Aoki
    ATR Computational Neuroscience Laboratories
    Kyoto University
  • Jin Hirano
    Kyoto University
  • Yukiyasu Kamitani
    ATR Computational Neuroscience Laboratories
    Kyoto University
  • Footnotes
    Acknowledgements  Japan Society for the Promotion of Science KAKENHI grants JP20H05705 and JP20H05954; Japan Science and Technology Agency CREST grants JPMJCR18A5 and JPMJCR22P3 and SPRING grant JPMJSP2110; New Energy and Industrial Technology Development Organization project JPNP20006
Journal of Vision September 2024, Vol.24, 474. doi:https://doi.org/10.1167/jov.24.10.474

      Fan L. Cheng, Tomoyasu Horikawa, Kei Majima, Misato Tanaka, Mohamed Abdelhack, Shuntaro C. Aoki, Jin Hirano, Yukiyasu Kamitani; Deciphering visual representations behind subjective perception using reconstruction methods. Journal of Vision 2024;24(10):474. https://doi.org/10.1167/jov.24.10.474.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Reconstruction techniques have been widely used to recover physical sensory inputs from brain signals. Numerous studies have progressively refined methods to achieve image reconstruction that faithfully mirrors the presented image at the pixel level. An intriguing extension of these techniques is their application to subjective mental content, a domain that has proven especially challenging. Here, we introduce a general framework for reconstructing subjective perceptual content: brain activity is decoded into deep neural network (DNN) representations, which are then converted into images by a generator. Through our research on visual illusions, a classic form of subjective perception defined by a discrepancy between sensory input and actual perception, we demonstrate that visual features absent from the sensory input can be successfully reconstructed. Our work shows the potential of reconstruction techniques as tools for probing visual mechanisms. The use of natural images as training data and the choice of DNNs were key to successful reconstruction. While extensive research has probed the neural underpinnings of visual illusions using qualitative hypotheses, our approach materializes mental content in formats amenable to visual interpretation and quantitative analysis. Reconstructions from individual brain areas shed light on the strength of illusory representations and how they are shared with representations of real features at different processing stages, providing a means to decipher the visual representations underlying illusory perception.
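
The framework described above has two stages: brain activity is first decoded into DNN feature representations, and these features are then converted into an image by a generator. The minimal Python sketch below illustrates that structure only; the ridge decoder, the data shapes, and the stand-in generator function are illustrative assumptions and do not reflect the authors' actual models, data, or code.

```python
# Minimal sketch of a two-stage reconstruction pipeline: (1) decode fMRI
# activity into DNN features, (2) convert decoded features into an image.
# All names and sizes here are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# --- Stage 1: decode brain activity into DNN feature representations. ---
# Training pairs: fMRI responses to natural images (X) and the DNN features
# of those same images (Y); a regularized linear decoder maps X to Y.
n_train, n_voxels, n_units = 1200, 5000, 1000           # assumed sizes
X_train = rng.standard_normal((n_train, n_voxels))       # fMRI voxel patterns
Y_train = rng.standard_normal((n_train, n_units))        # DNN features of the stimuli

decoder = Ridge(alpha=100.0)                              # one linear model per feature unit
decoder.fit(X_train, Y_train)

# Brain activity measured while the subject views an illusion-inducing image.
X_test = rng.standard_normal((1, n_voxels))
decoded_features = decoder.predict(X_test)                # predicted DNN features

# --- Stage 2: convert decoded features into an image with a generator. ---
# Placeholder for a pretrained image generator: here it simply maps the
# feature vector to a 64x64 RGB array so the pipeline runs end to end.
def generator(features: np.ndarray) -> np.ndarray:
    W = rng.standard_normal((features.shape[1], 64 * 64 * 3))
    img = features @ W
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)
    return img.reshape(64, 64, 3)

reconstruction = generator(decoded_features)
print(reconstruction.shape)                                # (64, 64, 3)
```

In the same spirit, restricting the voxel matrix to a single region of interest would yield the per-area reconstructions mentioned in the abstract, allowing illusory and real features to be compared across processing stages.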
