Abstract
The neuroscience of visual object categorisation has revealed a number of spatially and temporally distinct neural representations of objects in the human brain, though it has yet to fully account for the factors or features the brain uses to delineate object subcategories. The animate/inanimate dichotomy is well established as an overarching organisational principle. Some have further suggested a representational continuum, specifically one that groups objects according to similarities between biological classes (Connolly et al., 2012; Sha et al., 2014). While the latter may account for variability between animate subcategories, there has been limited evaluation of category structure within the inanimate domain. The neural representations of inanimate objects that exhibit animate features (e.g. human- or animal-like robots and toys) have yet to be fully explored; such objects raise questions about the potential contribution of more complex factors related to agency and experience, which are known to influence human perception of these objects (Gray, Gray, & Wegner, 2007). Using magnetoencephalography and multivariate pattern analyses, we mapped the time course of object categorisation for 120 images across 12 object categories (6 animate, 6 inanimate). We evaluated the efficacy of both dichotomy and continuum models of object categorisation, as well as newly generated models based on agency and experience. Our results indicate that the presence of faces best accounts for the representation of object categories around the time of peak decoding (~180 ms post-stimulus onset), whereas later representations (peaking at ~245 ms) appear best explained by more complex factors related to an object's perceived similarity to humans. These findings call for a re-evaluation of models of object categorisation to include more complex human-centred factors relating to agency and experience in the emerging representation of object categories in the human brain.
Meeting abstract presented at VSS 2016