We first examined the IMCV decoding data in an unsupervised fashion using MDS (Figures 2A and 2B). The distances between exemplars in each frame of the movie represent their dissimilarity in the brain's representation, i.e., the decodability between exemplars (a schematic code sketch of this procedure follows the paragraph). MDS makes no presupposition of categorical structure; nevertheless, categorical structure becomes prominent once the brain begins to process the stimuli. At early time points (<60 ms), the exemplars lie very close to one another, i.e., they are relatively indistinguishable based on brain activity. This is expected: before 0 ms the stimuli have not yet been presented, so the arrangement reflects only noise. Between 0 and 60 ms, visual inputs from the retina have yet to reach the cortex (Aine, Supek, & George, 1995; Brecelj, Kakigi, Koyama, & Hoshiyama, 1998; Di Russo, Martinez, Sereno, Pitzalis, & Hillyard, 2002; Jeffreys & Axford, 1972; Nakamura et al., 1997; Portin, Vanni, Virsu, & Hari, 1999; Supek et al., 1999), so exemplar decoding remains at chance. The arrangement changes dramatically at 80 ms: the distances between exemplars increase markedly, suggesting that individual exemplars are decodable (see statistical inference below). The arrangement also suggests categorical structure (grouping of points representing exemplars of the same category), which becomes more prominent after 120 ms. Note how in the figures the human face stimuli cluster together and separate from the remaining exemplars, while the monkey face stays with this face cluster until 180 ms; at that point, the human face stimuli cluster alone, consistent with a distinct response to human faces (Bentin et al., 1996; Liu et al., 2002). From 120 ms onward, object categories appear to cluster to varying degrees. The most notable distinction is between animate and inanimate objects: after 160 ms, the animate and inanimate exemplars separate and remain distinct. This is compatible with Kriegeskorte et al. (2008) and Kiani et al. (2007), who used a similar MDS-based approach to uncover categorical structure, and with a range of studies showing differences in the representation of categories in the brain (Caramazza & Shelton, 1998; Chan, Halgren, Marinkovic, & Cash, 2011; Chao, Haxby, & Martin, 1999; Epstein & Kanwisher, 1998; Kanwisher et al., 1997; Konkle & Oliva, 2012; McCarthy, 1995; Shinkareva et al., 2008). In particular, the animate/inanimate dichotomy emerges as a prominent division (Caramazza & Mahon, 2003).
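To make the procedure behind Figures 2A and 2B concrete, the sketch below applies metric MDS to a precomputed exemplar-by-exemplar dissimilarity (decodability) matrix at each time point; successive embeddings form the frames of the movie. This is a minimal sketch under stated assumptions rather than the analysis code used in this study: the array name `dissim`, its dimensions, and the use of scikit-learn's MDS implementation are illustrative choices.

```python
# Minimal illustrative sketch (not the analysis code used in this study).
# Assumes a hypothetical array `dissim` of shape (n_times, n_exemplars, n_exemplars)
# holding pairwise decodability (dissimilarity) between exemplars at each time point.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_times, n_exemplars = 5, 24                       # illustrative sizes only
dissim = rng.random((n_times, n_exemplars, n_exemplars))
dissim = (dissim + dissim.transpose(0, 2, 1)) / 2  # MDS expects a symmetric matrix
for t in range(n_times):
    np.fill_diagonal(dissim[t], 0.0)               # zero self-dissimilarity

# One 2-D embedding per time point gives one "frame" of the MDS movie;
# MDS itself imposes no categorical structure on the arrangement.
frames = []
for t in range(n_times):
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    frames.append(mds.fit_transform(dissim[t]))    # (n_exemplars, 2) coordinates
```

The per-frame coordinates can then be plotted or animated over time; because MDS only preserves pairwise distances, any clustering by category that appears in the frames reflects structure in the decoding data rather than a property of the embedding itself.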