Vision Sciences Society Annual Meeting Abstract  |   September 2024
Object size and depth representations in human visual cortex
Author Affiliations & Notes
  • Mengxin Ran
    The Ohio State University
  • Zitong Lu
    The Ohio State University
  • Julie D. Golomb
    The Ohio State University
  • Footnotes
    Acknowledgements  NIH R01-EY025648 (JG), NSF 1848939 (JG), Center for Cognitive & Behavioral Brain Imaging ADNiR scholar (MR)
Journal of Vision September 2024, Vol.24, 1334. doi:https://doi.org/10.1167/jov.24.10.1334
© ARVO (1962-2015); The Authors (2016-present)

Abstract

One of the key abilities in human object perception is maintaining a reliable representation of an object’s real-world size across varying distances and viewpoints. Previous research has indicated that neural responses in ventral temporal cortex reflect representations of objects’ real-world size. However, the stimuli used in these prior studies confounded two related properties: perceived real-world size and real-world depth (distance). Moreover, the stimuli lacked naturalistic backgrounds, preventing examination of these visual mechanisms under more ecologically valid conditions. Addressing these limitations, a recent study from our group conducted a model-based representational similarity analysis on EEG data from a large-scale dataset of subjects viewing natural images featuring objects of varying retinal sizes and depths. That EEG study successfully disentangled distinct timecourses for the processing of objects’ real-world size and real-world depth. To localize these object representations with finer spatial resolution, our current study applies a similar analysis approach to fMRI data, aiming to explore how different parts of human visual cortex represent objects’ real-world size and depth information in natural images. Applying our model-based representational similarity analysis to the THINGS fMRI dataset, we isolated neural representations specific to real-world size, real-world depth, and retinal size across human visual cortex. We found the most robust real-world depth representations in scene-selective regions such as the Parahippocampal Place Area (PPA) and the Transverse Occipital Sulcus (TOS), and the most robust real-world size representations in mid-level visual regions such as V4, V3A, and V3B. Our study delineates how various regions of human visual cortex are involved in processing different object size and depth features via an advanced computational approach, offering insight into how the human brain processes object information within naturalistic images.
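For readers unfamiliar with the analysis framework, the sketch below illustrates the general logic of a model-based representational similarity analysis with partial correlations: build one model dissimilarity matrix (RDM) per object feature, build a neural RDM from ROI voxel patterns, and correlate the neural RDM with each feature RDM while controlling for the others. This is a minimal illustration on toy data, not the authors' pipeline; the feature values, ROI patterns, and all variable names are hypothetical, and the published analyses on the THINGS fMRI dataset involve additional steps not shown here.

import numpy as np
from scipy.stats import rankdata

def model_rdm(feature_values):
    # Model RDM: pairwise absolute differences of a 1-D feature across images.
    f = np.asarray(feature_values, dtype=float)
    return np.abs(f[:, None] - f[None, :])

def neural_rdm(patterns):
    # Neural RDM: 1 - Pearson correlation between voxel patterns (rows = images).
    return 1.0 - np.corrcoef(patterns)

def upper(rdm):
    # Vectorize the upper triangle (unique image pairs) of an RDM.
    i, j = np.triu_indices(rdm.shape[0], k=1)
    return rdm[i, j]

def partial_spearman(x, y, controls):
    # Spearman correlation of x and y after regressing the ranked control
    # variables out of the ranks of each (rank-based partial correlation).
    def resid(v, ctrl):
        design = np.column_stack([np.ones(len(v))] + [rankdata(c) for c in ctrl])
        beta, *_ = np.linalg.lstsq(design, rankdata(v), rcond=None)
        return rankdata(v) - design @ beta
    rx, ry = resid(x, controls), resid(y, controls)
    return np.corrcoef(rx, ry)[0, 1]

# Toy data: 60 images x 500 voxels from one hypothetical ROI.
rng = np.random.default_rng(0)
n_img = 60
patterns = rng.standard_normal((n_img, 500))
real_size = rng.uniform(0.1, 10, n_img)  # hypothetical real-world size per image
depth = rng.uniform(0.5, 50, n_img)      # hypothetical real-world depth (distance)
retinal = real_size / depth              # retinal size ~ size/distance (small-angle)

nrdm = upper(neural_rdm(patterns))
size_rdm = upper(model_rdm(real_size))
depth_rdm = upper(model_rdm(depth))
ret_rdm = upper(model_rdm(retinal))

# Unique contribution of real-world size, controlling depth and retinal size.
r_size = partial_spearman(nrdm, size_rdm, [depth_rdm, ret_rdm])
print(f"partial Spearman r (real-world size): {r_size:.3f}")

Repeating the last step with the roles of the three feature RDMs rotated yields the unique contribution of each feature per ROI, which is how depth-specific effects in scene-selective regions can be separated from size-specific effects in mid-level regions.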
