Vision Sciences Society Annual Meeting Abstract | December 2022
Journal of Vision, Volume 22, Issue 14 | Open Access
Attenuated perception of visual stimuli synthesized from subspace neural activity
Author Affiliations & Notes
  • Guohua Shen
    Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan.
  • Shu Fujimori
    Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan.
  • Gowrishankar Ganesh
    Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier (LIRMM), Univ. Montpellier, CNRS, Montpellier, France.
  • Yoichi Miyawaki
    Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan.
    Center for Neuroscience and Biomedical Engineering, The University of Electro-Communications, Tokyo, Japan.
  • Footnotes
    Acknowledgements  This research was partially supported by JST ERATO Grant Number JPMJER1701 (Inami JIZAI Body Project) and JSPS KAKENHI Grant Numbers 15K12623 and 20H00600.
Journal of Vision December 2022, Vol.22, 3880. doi:https://doi.org/10.1167/jov.22.14.3880
Citation: Guohua Shen, Shu Fujimori, Gowrishankar Ganesh, Yoichi Miyawaki; Attenuated perception of visual stimuli synthesized from subspace neural activity. Journal of Vision 2022;22(14):3880. https://doi.org/10.1167/jov.22.14.3880.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Visual images are encoded by neural activity in the brain to generate our conscious perception. Given the large variety of possible spatial patterns, neural activity should have a higher dimensionality than the visual images we observe in daily life. We therefore hypothesized that there is subspace neural activity that is not used to encode common visual experience. To examine visual perception corresponding to such hypothetical subspace neural activity, we synthesized visual stimuli corresponding to it and quantified their visibility. For this purpose, we used a deep convolutional neural network as a proxy for the visual system of the human brain and identified the primary space and the subspace that encode 99% and 1% of the variance of activity in an early layer of the network, respectively. We then sampled activity patterns in the primary space and the subspace and back-projected them to image pixel space to synthesize visual stimuli corresponding to the primary space (“primary space stimuli”) and the subspace (“subspace stimuli”), respectively. We presented these stimuli to participants and measured contrast detection thresholds. We also presented frequency-matched but phase-scrambled versions of the primary space stimuli and subspace stimuli to control for effects of spatial-frequency differences between them. Results showed that the subspace stimuli had higher contrast detection thresholds than the primary space stimuli. Control analyses further showed that contrast detection thresholds were higher for the subspace stimuli, and lower for the primary space stimuli, than for their respective frequency-matched phase-scrambled versions, indicating that the reduced visibility of the subspace stimuli is not due to spatial-frequency differences but to factors related to the phase information embedded in them.
These results suggest that there may be frequency-independent, non-classical stimulus configurations that fall into a subspace of neural activity and produce attenuated perception in human observers.
