Abstract
Previous studies using electrophysiological recordings have identified the time course of category representation during the first several hundred milliseconds of object recognition, but less is known about the perceptual and semantic features reflected by this information (Cichy et al., 2016; Clarke et al., 2012). Here we apply machine learning methods and representational similarity analysis (RSA) to MEG recordings to elucidate the temporal evolution of representations for concrete visual objects. During MEG recording, 32 participants were repeatedly presented with object stimuli while completing a visual oddball task. Half of the participants were exposed to one set of 84 object exemplars, while the other half were presented with different exemplars of the same concepts. The 84 object concepts were selected based on lexical frequency. We used a support vector classifier to compute pairwise decoding accuracies between all object items at all time points, which served as dissimilarity matrices for later analyses. Complementary behavioral data from an object arrangement task were included in our analyses, as were model predictions from a semantic model and a convolutional neural network (CNN). MEG analyses showed robust pairwise decoding of object images, peaking around 100 ms post-stimulus onset. Before 150 ms, the MEG data contained information similar to that in the early layers of the CNN, suggesting that early discriminability in patterns of neural activity was driven by visual information. From 200 to 450 ms, the MEG data showed persistent similarity across visual exemplars of the same concept. Further, the MEG data correlated strongly with the behavioral data, mid-level CNN layers, and the semantic model. Together, these results suggest the emergence of an abstract, behaviorally relevant representation of concrete object concepts peaking between 250 and 300 ms.
Meeting abstract presented at VSS 2017
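The analysis pipeline summarized above can be illustrated with a minimal sketch (not the authors' code): time-resolved pairwise SVM decoding of MEG sensor patterns is assembled into a representational dissimilarity matrix (RDM) per time point, which is then compared to a model RDM via Spearman correlation (RSA). All data, array sizes, and the model RDM below are simulated and purely illustrative assumptions.

```python
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_items, n_trials, n_sensors, n_times = 10, 20, 64, 30  # hypothetical sizes

# Simulated MEG data: (items, trials per item, sensors, time points).
meg = rng.standard_normal((n_items, n_trials, n_sensors, n_times))

# Time-resolved RDMs: cross-validated pairwise SVM decoding accuracy
# between every pair of object items, at every time point.
rdms = np.zeros((n_times, n_items, n_items))
for t in range(n_times):
    for i, j in combinations(range(n_items), 2):
        X = np.vstack([meg[i, :, :, t], meg[j, :, :, t]])
        y = np.r_[np.zeros(n_trials), np.ones(n_trials)]
        acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
        rdms[t, i, j] = rdms[t, j, i] = acc

# RSA: Spearman correlation between the lower triangle of each time point's
# RDM and a model RDM (a stand-in here for a CNN-layer or semantic-model RDM).
model_rdm = rng.random((n_items, n_items))
tril = np.tril_indices(n_items, k=-1)
rsa_timecourse = np.array(
    [spearmanr(rdms[t][tril], model_rdm[tril])[0] for t in range(n_times)]
)
print(rsa_timecourse.shape)  # one model-fit value per time point
```

With real data, the simulated array would be replaced by preprocessed single-trial MEG sensor patterns, and the model RDMs would come from CNN layer activations, the semantic model, or the behavioral object-arrangement distances described in the abstract.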