October 2020, Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract
Transformations of Object Representations Across the Human Visual Processing Hierarchy
Author Affiliations & Notes
  • Viola Mocz
    Yale University
  • Maryam Vaziri-Pashkam
    National Institute of Mental Health
  • Marvin Chun
    Yale University
  • Yaoda Xu
    Yale University
  • Footnotes
    Acknowledgements  Grant information: NIH 1R01EY022355
Journal of Vision October 2020, Vol.20, 1262. doi:https://doi.org/10.1167/jov.20.11.1262

      Viola Mocz, Maryam Vaziri-Pashkam, Marvin Chun, Yaoda Xu; Transformations of Object Representations Across the Human Visual Processing Hierarchy. Journal of Vision 2020;20(11):1262. https://doi.org/10.1167/jov.20.11.1262.

      © ARVO (1962-2015); The Authors (2016-present)


Previous work has shown that linear transformation functions can be derived within human lateral occipital cortex for affine changes (i.e., size and viewpoint) of objects, and that these functions can then predict the neural responses to such changes for new object categories (Ward et al., 2018, J Neurosci). In the current study, we explored such transformations within brain regions of interest (ROIs) along the ventral stream of the human visual hierarchy, including V1, V2, V3, V4, ventral occipitotemporal cortex (VOT), and lateral occipitotemporal cortex (LOT). We examined data from four existing fMRI experiments (Vaziri-Pashkam and Xu, 2018, Cereb Cortex; Vaziri-Pashkam et al., 2018, J Cogn Neurosci) and analyzed four types of transformations: 1) original format vs. controlled format (image contrast, luminance, and spatial frequency equalized across all categories using the SHINE toolbox in Matlab), 2) appearing above vs. below fixation, 3) small vs. large size, and 4) high vs. low spatial frequency. Using linear transformations, we could successfully predict neural responses across all four types of transformations throughout the human ventral visual pathway. However, for the transformations of position, size, and spatial frequency, the learned transformations generalized to a new object category only in LOT and VOT, not in early visual areas, whereas for the transformation of original vs. controlled format, the learned transformations generalized to a new object category in all ventral ROIs examined. These results indicate that higher-level visual regions represent these transformations in a largely category-independent manner, while lower-level visual regions represent them in a largely category-dependent manner.
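The core analysis described above, fitting a linear transformation between voxel patterns measured in two stimulus formats and then testing whether that transformation generalizes to a held-out object category, can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' actual pipeline: the number of categories, exemplars, and voxels, and the use of ordinary least squares and pattern correlation as the generalization score, are all assumptions made for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: voxel response patterns for 5 object categories,
# 20 exemplars each, measured in two stimulus formats (e.g., small vs.
# large size). Rows are stimuli, columns are voxels.
n_cat, n_ex, n_vox = 5, 20, 10
X_a = rng.standard_normal((n_cat * n_ex, n_vox))           # format A patterns
true_W = rng.standard_normal((n_vox, n_vox)) / np.sqrt(n_vox)
X_b = X_a @ true_W + 0.1 * rng.standard_normal(X_a.shape)  # format B patterns

labels = np.repeat(np.arange(n_cat), n_ex)
train, test = labels != 4, labels == 4   # hold out category 4

# Fit a linear transformation W mapping format-A patterns to format-B
# patterns on the training categories.
W, *_ = np.linalg.lstsq(X_a[train], X_b[train], rcond=None)

# Generalization test: predict the held-out category's format-B patterns
# from its format-A patterns, and score each prediction by its correlation
# with the observed pattern.
pred = X_a[test] @ W
r = np.mean([np.corrcoef(p, o)[0, 1] for p, o in zip(pred, X_b[test])])
print(f"mean prediction correlation on held-out category: {r:.2f}")
```

Under this synthetic generative model the mapping truly is linear and shared across categories, so the held-out correlation is high; in the abstract's terms, a region where the fitted transformation transfers like this represents the transformation in a category-independent manner, while a region where transfer fails represents it in a category-dependent manner.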

