Journal of Vision | April 2025, Volume 25, Issue 5
Open Access | Optica Fall Vision Meeting Abstract
Poster Session: Development of a natural wideview 3D scene fMRI dataset for modeling human spatial cognition
Author Affiliations
  • Joseph Obriot, Pei-Yin Chen, Atsushi Wada
    Center for Information and Neural Networks (CiNet), Advanced ICT Research Institute, National Institute of Information and Communications Technology
Journal of Vision April 2025, Vol.25, 16. doi:https://doi.org/10.1167/jov.25.5.16
Citation: Joseph Obriot, Pei-Yin Chen, Atsushi Wada; Poster Session: Development of a natural wideview 3D scene fMRI dataset for modeling human spatial cognition. Journal of Vision 2025;25(5):16. https://doi.org/10.1167/jov.25.5.16.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Recent research shows that deep neural networks (DNNs) trained for object recognition can predict neural responses to natural stimuli with unprecedented accuracy, serving as computational models of hierarchical visual processing along the ventral visual stream. Several functional brain datasets compiling neural responses to large-scale natural image sets have been published to facilitate this DNN modeling approach. However, their application to spatial cognitive processing, especially within the dorsal visual stream, remains underexplored. Here, we propose a novel dataset that combines fMRI with wideview stereoscopic presentation of natural 3D scenes, reflecting conditions known to facilitate spatial cognitive functions. The stimuli consisted of movie clips of indoor 3D scenes with 3D observer motion, generated using Habitat-Sim, a simulator of real-world environments for training embodied AI. To preserve the geometrical accuracy of the 3D spatial structure, the viewing angle and each participant's interpupillary distance were set identically between rendering and presentation. Training and test data were acquired in separate scanning runs, each presenting the scene movie clips continuously. Preliminary results show voxels with high explainable variance across both ventral and dorsal visual cortical areas, extending into the far periphery, indicating the dataset's potential for quantitative, high-dimensional modeling of the visuo-spatial processing involved in human spatial cognition.
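The explainable-variance screening mentioned above is commonly computed from repeated presentations of the test stimuli: the fraction of a voxel's response variance that survives averaging across repeats. The abstract does not specify the exact estimator used, so the following is a minimal numpy sketch of one standard definition (residual variance around the repeat-averaged response, relative to total variance); the function name and array layout are illustrative assumptions, not taken from the dataset itself.

```python
import numpy as np

def explainable_variance(responses):
    """Estimate per-voxel explainable variance from repeated runs.

    responses: array of shape (n_repeats, n_timepoints) for one voxel,
               or (n_repeats, n_timepoints, n_voxels).
    Returns a scalar (or length-n_voxels array) in (-inf, 1]; values near 1
    indicate highly repeatable, stimulus-driven responses.
    """
    # Repeat-averaged time course: the best estimate of the stimulus-driven signal.
    mean_resp = responses.mean(axis=0, keepdims=True)
    # Variance of the residual (trial-to-trial noise) around that signal.
    residual_var = (responses - mean_resp).var(axis=(0, 1))
    # Total response variance across all repeats and timepoints.
    total_var = responses.var(axis=(0, 1))
    return 1.0 - residual_var / total_var
```

Note that this simple estimator is biased upward for small repeat counts (pure noise yields an expected value near 1/n_repeats rather than 0), so published analyses often apply a bias correction before thresholding voxels.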

Footnotes
 Funding: This research was supported by JSPS KAKENHI grant 21H04896.