September 2024, Volume 24, Issue 10 | Open Access
Vision Sciences Society Annual Meeting Abstract
Neural representation of translucent and opaque object images in macaque inferior temporal cortex
Author Affiliations & Notes
  • Hoko Nakada
    National Institute of Advanced Industrial Science and Technology (AIST), Human Informatics and Interaction Research Institute
  • Daiki Nakamura
    National Institute of Advanced Industrial Science and Technology (AIST), Human Informatics and Interaction Research Institute
  • Ryusuke Hayashi
    National Institute of Advanced Industrial Science and Technology (AIST), Human Informatics and Interaction Research Institute
  • Footnotes
    Acknowledgements  The study was supported by the Japan Science and Technology Agency, Moonshot Research & Development Program grant JPMJMS2012, and the National Institute of Information and Communications Technology (NICT) grant NICT 22301.
Journal of Vision September 2024, Vol. 24, 461. https://doi.org/10.1167/jov.24.10.461
Abstract

Translucency and opacity are optical properties of materials characterized by the extent to which light passes through them. Although recent psychophysical studies have revealed the visual properties associated with the perception of translucency, the neural substrates underlying this perception remain unknown. In this study, we conducted electrophysiological experiments using visual stimuli of objects with various shapes and varying degrees of translucency. To regulate the visual attributes systematically, we used the Translucent Appearance Generation (TAG) model, an unsupervised artificial neural network designed to synthesize images of material appearances. The latent variables of this model encode human-interpretable visual attributes, such as object color in the fine-scale layer and shape in the coarse-scale layers. Notably, variables in the middle layers encode feature information related to whether objects appear translucent or opaque. We created visual stimuli by generating object images from sampled latent variables (27 points from the coarse-scale layers and 7 points from the middle layers) using the TAG model and converting them to grayscale. Mean luminance, Michelson contrast, and the area of the object region were equalized across stimuli to minimize the influence of these factors on the experimental results. We recorded neural responses from the inferior temporal cortex of one macaque monkey using four multi-electrode arrays (128 channels each); three arrays were placed in area TE and one in area TEO. Representational similarity analysis showed higher correlations among semi-translucent images than between translucent and opaque images after averaging responses across different shapes. Unit-level response analysis indicated that some neurons exhibited selectivity for translucent or opaque objects. These findings suggest that neurons in the inferior temporal cortex represent differences in object appearance related to translucency and opacity at both the population and single-unit levels.
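The stimulus-equalization step described above (matching mean luminance and Michelson contrast across images) can be sketched as follows. This is a minimal illustration assuming grayscale images with values in [0, 1]; the function names and target values are hypothetical, not the authors' code.

```python
import numpy as np

def michelson_contrast(img):
    """Michelson contrast: (Lmax - Lmin) / (Lmax + Lmin)."""
    lo, hi = float(img.min()), float(img.max())
    return (hi - lo) / (hi + lo)

def equalize_stimulus(img, target_mean=0.5, target_contrast=0.4):
    """Affinely rescale a grayscale image so its mean luminance matches
    target_mean and its Michelson contrast matches target_contrast.

    For an image spanning [m - r, m + r], Michelson contrast is r / m,
    so the desired half-range is target_contrast * target_mean. The
    contrast match is exact when the pixel histogram is symmetric about
    the mean; otherwise the achieved contrast is an upper bound.
    """
    img = img.astype(float)
    centered = img - img.mean()            # zero-mean image
    half_range = np.abs(centered).max()    # current half-range r
    desired_half_range = target_contrast * target_mean
    return centered * (desired_half_range / half_range) + target_mean
```

An affine remap like this preserves the spatial structure of the object image while removing low-level luminance and contrast differences between stimuli, which is the stated goal of the equalization.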
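The representational similarity analysis mentioned above compares correlations between population response patterns across stimulus conditions. A minimal sketch with toy data (the array shapes, random seed, and dissimilarity measure here are illustrative, not the recorded dataset or the authors' exact pipeline):

```python
import numpy as np

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between population response patterns.

    `responses` has shape (n_stimuli, n_units); entry (i, j) of the
    result is the dissimilarity between stimuli i and j.
    """
    return 1.0 - np.corrcoef(responses)

# Toy example: 4 stimulus conditions x 6 recorded units.
rng = np.random.default_rng(0)
resp = rng.normal(size=(4, 6))
d = rdm(resp)
# d is symmetric with zeros on the diagonal; small off-diagonal values
# indicate stimuli that evoke similar population responses.
```

Averaging responses across shapes before computing such a matrix, as in the abstract, isolates the translucency-related component of the representation from shape-driven variance.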
