Viola Mocz, Maryam Vaziri-Pashkam, Marvin Chun, Yaoda Xu; Transformations of Object Representations Across the Human Visual Processing Hierarchy. Journal of Vision 2020;20(11):1262. doi: https://doi.org/10.1167/jov.20.11.1262.
Previous work has shown that linear transformation functions can be derived within human lateral occipital cortex for affine changes of objects (i.e., size and viewpoint), and that these functions can then predict the neural responses to such changes for new object categories (Ward et al., 2018, J Neurosci). In the current study, we explored such transformations within brain regions of interest (ROIs) along the ventral stream of the human visual hierarchy, including V1, V2, V3, V4, ventral occipitotemporal cortex (VOT), and lateral occipitotemporal cortex (LOT). We examined data from four existing fMRI experiments (Vaziri-Pashkam and Xu, 2018, Cereb Cortex; Vaziri-Pashkam et al., 2018, J Cogn Neurosci) and analyzed four types of transformations: 1) original format vs. controlled format (image contrast, luminance, and spatial frequency equalized across all categories using the SHINE toolbox in Matlab), 2) appearing above vs. below fixation, 3) small vs. large size, and 4) high vs. low spatial frequency. Using linear transformations, we could successfully predict neural responses across all four types of transformations throughout the human ventral visual pathway. However, for the transformations of position, size, and spatial frequency, the learned transformations generalized to a new object category only in LOT and VOT, and not in early visual areas, whereas for the transformation of original vs. controlled format, the learned transformations generalized to a new object category in all ventral ROIs examined. These results indicate that higher-level visual regions represent transformations in a category-independent manner, while lower-level visual regions largely represent transformations in a category-dependent manner.
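The core analysis can be illustrated with a small sketch: learn a linear map between voxel patterns in two conditions (e.g., small vs. large size) from a set of training categories, then test whether that map predicts the patterns of a held-out category. This is a minimal illustration with simulated data, not the authors' pipeline; the data shapes, the least-squares fit, and the correlation-based evaluation are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_voxels = 50          # voxels in a hypothetical ROI
n_categories = 6       # object categories
n_trials = 20          # pattern estimates per category and condition

# Simulated voxel patterns for each category in condition A (e.g. small
# size) and condition B (e.g. large size). A shared linear map plus noise
# stands in for a category-independent transformation.
W_true = rng.normal(size=(n_voxels, n_voxels)) / np.sqrt(n_voxels)
patterns_a = {c: rng.normal(size=(n_trials, n_voxels))
              for c in range(n_categories)}
patterns_b = {c: patterns_a[c] @ W_true
                 + 0.1 * rng.normal(size=(n_trials, n_voxels))
              for c in range(n_categories)}

def fit_linear_map(X, Y):
    """Least-squares estimate of W such that X @ W approximates Y."""
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def pattern_correlation(Y_pred, Y_true):
    """Mean correlation between predicted and observed voxel patterns."""
    rs = [np.corrcoef(p, t)[0, 1] for p, t in zip(Y_pred, Y_true)]
    return float(np.mean(rs))

# Leave-one-category-out generalization: learn the transformation on all
# but one category, then predict the held-out category's condition-B
# patterns from its condition-A patterns.
scores = []
for held_out in range(n_categories):
    train = [c for c in range(n_categories) if c != held_out]
    X = np.vstack([patterns_a[c] for c in train])
    Y = np.vstack([patterns_b[c] for c in train])
    W = fit_linear_map(X, Y)
    Y_pred = patterns_a[held_out] @ W
    scores.append(pattern_correlation(Y_pred, patterns_b[held_out]))

print(f"mean held-out prediction r = {np.mean(scores):.2f}")
```

In this toy setup the transformation is category-independent by construction, so held-out prediction succeeds; the abstract's finding is that real data behave this way in LOT and VOT but not in early visual areas for position, size, and spatial-frequency changes.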