Yi Chen, John-Dylan Haynes; Invariant decoding of object categories from V1 and LOC across different colors, sizes and speeds. Journal of Vision 2008;8(6):37. doi: 10.1167/8.6.37.
Categorical representations of objects in visual cortex have been intensively investigated using fMRI in humans. However, it remains unclear to what degree specific brain regions encode objects independently of their defining features. We approached this problem using 3D renderings of objects rotating around a randomly changing axis. Support vector machine (SVM)-based pattern classification algorithms were used in combination with a spherical searchlight technique to decode objects from fMRI signals. We investigated to what extent changes in the size, color, and rotation speed of objects affected the accuracy with which they could be decoded. The degree of generalization of classifiers across conditions was further assessed by training a classifier on one size or rotation speed and testing it on the other. As expected, object selectivity in temporal cortex showed much higher generalization than in retinotopic cortex. We also predicted that decoding of rotating images across different trajectories would be possible in LOC but not in V1, where the time-averaged pattern should be identical for all objects and thus should not provide any discriminative information. Interestingly, contrary to this expectation, decoding accuracy for objects in V1 was above chance for all speeds except for a static control condition, in which decoding was attempted across multiple static 3D renderings, suggesting that static rather than dynamic images provide the best way to distinguish V1 and LOC. In summary, our results support the notion that object representations in temporal cortex can be decoded independently of their precise spatial representation in retinotopic regions.