September 2024
Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Neural processing of scene-relative object movement during self-movement
Author Affiliations & Notes
  • Xuechun Shen
    East China Normal University, Shanghai, China
    NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
  • ZhouKuiDong Shan
    New York University Shanghai, Shanghai, China
    NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
  • Simon Rushton
    Cardiff University, Cardiff, United Kingdom
  • ShuGuang Kuai
    East China Normal University, Shanghai, China
    NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
  • Li Li
    New York University Shanghai, Shanghai, China
    NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
  • Footnotes
    Acknowledgements  Supported by research grants from the National Natural Science Foundation of China (32071041, 32161133009, 32022031), China Ministry of Education (ECNU 111 Project, Base B1601), the major grant seed fund and the boost fund from NYU Shanghai, and UK Economic and Social Research Council (ES/S015272/1)
Journal of Vision September 2024, Vol.24, 969. doi:https://doi.org/10.1167/jov.24.10.969
Xuechun Shen, ZhouKuiDong Shan, Simon Rushton, ShuGuang Kuai, Li Li; Neural processing of scene-relative object movement during self-movement. Journal of Vision 2024;24(10):969. https://doi.org/10.1167/jov.24.10.969.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Much research has examined how the visual system identifies scene-relative object movement during self-movement. Here we examined the underlying neural processing by identifying the brain regions involved in this task. In a Siemens Magnetom Prisma Fit 3T MRI scanner, participants viewed through prism glasses a stereo display (9.5° H × 19° V) that simulated lateral self-movement (speed: 0.032 m/s), with counter-rotation of gaze, through a 3D volume composed of 63 randomly positioned red wireframe objects (depth: 0.55-1.05 m). In the non-moving target condition, a yellow target object was positioned at 1/4 (near) or 3/4 (far) of the scene's depth range. In the moving target condition, the target at the near distance was given its retinal speed at the far distance and vice versa, causing the target to appear to move in the scene. Target movement was thus not defined by a higher or lower speed than the rest of the scene objects, and the moving and non-moving target conditions were equated for all retinal information. We also tested a control condition without simulated self-movement, in which the scene remained static on the screen. To control attention, on each 2-s trial during scanning, participants were asked to report when the scene objects underwent a luminance contrast change, a task irrelevant to object movement identification. We identified known visual and optic-flow areas as regions of interest (ROIs) using standard localizers and performed multi-voxel pattern analysis (MVPA) on the 300 most active voxels in each ROI. Across 20 participants, the accuracy of decoding scene-relative object movement versus no object movement was significantly above chance in the higher-level dorsal visual areas V7 and MT+. Furthermore, these areas could successfully differentiate scene-relative object movement with and without simulated self-movement. Using these carefully matched visual stimuli, the current study reveals that areas V7 and MT+ play a crucial role in processing scene-relative object movement during self-movement.
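The binary decoding analysis described above (movement vs. no movement over the 300 most active voxels per ROI, with accuracy compared against 50% chance) can be sketched as follows. This is a minimal illustration with simulated data: the leave-one-run-out correlation classifier, the run count, and the noise levels are assumptions for the sketch, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 10 runs x 2 conditions, 300 voxels per ROI
# (mirrors the "300 most active voxels"; the patterns are simulated).
n_runs, n_voxels = 10, 300
signal = rng.normal(0, 1, n_voxels)                       # condition-specific pattern
cond_a = signal + rng.normal(0, 2, (n_runs, n_voxels))    # "object movement" runs
cond_b = -signal + rng.normal(0, 2, (n_runs, n_voxels))   # "no object movement" runs

def loro_decode(a, b):
    """Leave-one-run-out, correlation-based two-class decoding accuracy."""
    correct = 0
    for i in range(len(a)):
        # Training templates: mean pattern over all runs except the held-out one
        train_a = np.delete(a, i, axis=0).mean(axis=0)
        train_b = np.delete(b, i, axis=0).mean(axis=0)
        for test, label in ((a[i], 0), (b[i], 1)):
            # Assign the held-out pattern to the template it correlates with more
            r_a = np.corrcoef(test, train_a)[0, 1]
            r_b = np.corrcoef(test, train_b)[0, 1]
            correct += int((0 if r_a > r_b else 1) == label)
    return correct / (2 * len(a))

acc = loro_decode(cond_a, cond_b)
print(f"decoding accuracy: {acc:.2f} (chance = 0.50)")
```

In the study proper, above-chance accuracy of this kind in an ROI (here, V7 and MT+) is what licenses the claim that the region carries information about scene-relative object movement.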
