Abstract
The prevailing view of the ventral visual system is that object representations tolerant to particular viewing conditions, such as location in the visual field, emerge in the high-level visual area IT, whereas object properties particular to the viewing situation are represented in low-level visual areas. Contrary to this theory, a recent study using single-cell electrophysiology and modelling in monkeys (Hong et al., 2016) observed that IT represents both object identity and location information, and that low-level visual areas may fail to represent viewing-specific object properties when objects appear under real-world, cluttered viewing conditions. Here, we investigated the processing of object location and category, and its dependence on the nature of the background of the visual scene, in the human brain using EEG and multivariate pattern classification. The rationale was that the latency with which object category and location representations emerge in the human brain indicates the processing stage in the ventral visual stream at which they arise. In the experiment, participants (N=25) viewed object images from four categories, presented at four locations and in three background conditions (high-, low-, and no-clutter) (Fig. 1A). We found that representations of object category (Fig. 1B) and object location (Fig. 1C) emerged later in time when objects were presented on cluttered backgrounds (20 ms later for category and 70 ms later for location). Further analysis comparing the temporal dynamics with which location representations emerged in the clutter vs. no-clutter conditions revealed similar representations shifted in time, rather than different representations (Fig. 1D). Together, our findings show that, contrary to the prevailing theory of object recognition in cortex, under real-world viewing conditions not only object category but also object location is represented at late processing stages.
Meeting abstract presented at VSS 2018
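
As an illustration of the kind of analysis the abstract describes, the sketch below shows time-resolved multivariate decoding of a stimulus property (e.g., object category) from EEG epochs, followed by a crude onset-latency estimate. This is not the authors' code: the synthetic data, the linear discriminant classifier, the number of channels/time points, and the threshold for "onset" are all assumptions chosen for illustration; a real analysis would use the recorded epochs and permutation or cluster-based statistics.

```python
# Illustrative sketch (assumed pipeline, not the authors' implementation):
# decode a stimulus property at every time point and estimate decoding onset.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 120   # hypothetical epoch dimensions
times = np.linspace(-0.1, 0.5, n_times)        # seconds relative to stimulus onset
labels = rng.integers(0, 4, n_trials)          # four object categories

# Synthetic epochs: noise plus a weak category-specific pattern after ~150 ms
X = rng.normal(size=(n_trials, n_channels, n_times))
patterns = rng.normal(size=(4, n_channels))    # one spatial pattern per category
post = times > 0.15
X[:, :, post] += 0.5 * patterns[labels][:, :, None]

# Decode the labels separately at every time point (5-fold cross-validation)
accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LinearDiscriminantAnalysis()
    accuracy[t] = cross_val_score(clf, X[:, :, t], labels, cv=5).mean()

# Crude onset latency: first time point where accuracy exceeds chance (0.25)
# by a fixed margin; real analyses use permutation/cluster-based statistics.
above = accuracy > 0.25 + 0.05
onset = times[np.argmax(above)] if above.any() else np.nan
print(f"estimated decoding onset: {onset * 1e3:.0f} ms")
```

Comparing such onset estimates between background conditions (cluttered vs. uncluttered), and cross-training classifiers across conditions, is one plausible way to obtain the latency differences and the "same representation, shifted in time" pattern reported above.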