Abstract
Object recognition is fast and reliable, and works even when our eyes are focused elsewhere. The aim of our study was to examine how the visual system compensates for degraded inputs in object recognition by characterizing the time course of the brain's processing of naturally degraded visual object stimuli. The study used a set of 48 images depicting real-world objects (24 animate and 24 inanimate). In Experiment 1, we degraded the images by varying the simulated focus, such that all images were equally recognizable. In Experiment 2, we presented the intact and out-of-focus images to participants while their brain activity was recorded using magnetoencephalography (MEG). During the recording, participants were asked to categorize the objects as animate or inanimate as quickly and accurately as possible. We predicted a behavioural reaction-time effect and, accordingly, observed that degraded objects were recognized 22 ms more slowly than intact objects. Time-resolved multivariate pattern analysis was used to decode category (animacy) membership, as well as object identity for all possible pairwise exemplar comparisons, as a function of time. In the decoding analysis, we observed lower overall decoding performance for degraded images, and the decoding onset and peak for degraded stimuli occurred 15 ms later. We assessed several models to explain the behavioural reaction-time difference, including distance-based models, which predict reaction times from exemplar decodability, and time-based models, which use the decoding onset and peak latencies. Our analysis shows that the distance-based models are the better predictors. These findings suggest that the time at which decodable information emerges is less important for determining reaction-time behaviour than the quality of the representation (its decodability).
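To make the time-resolved pairwise decoding concrete, the following is a minimal sketch, not the authors' exact pipeline. It assumes epoched MEG data in a NumPy array `epochs` of shape (n_trials, n_sensors, n_timepoints) and exemplar labels `labels` of shape (n_trials,); the classifier (linear discriminant analysis) and the cross-validation scheme are illustrative assumptions, as the abstract does not specify them.

```python
# Sketch: time-resolved pairwise exemplar decoding (MVPA).
# Assumed inputs: `epochs` (n_trials, n_sensors, n_timepoints),
# `labels` (n_trials,) with one integer code per exemplar.
from itertools import combinations

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score


def pairwise_decoding(epochs, labels, n_folds=5):
    """Decode every pair of exemplars at every time point.

    Returns an array of shape (n_pairs, n_timepoints) containing
    cross-validated classification accuracy per exemplar pair.
    """
    exemplars = np.unique(labels)
    pairs = list(combinations(exemplars, 2))
    n_timepoints = epochs.shape[2]
    accuracy = np.zeros((len(pairs), n_timepoints))

    for p, (a, b) in enumerate(pairs):
        mask = np.isin(labels, [a, b])      # trials from this pair only
        y = (labels[mask] == b).astype(int)  # binary labels for the pair
        for t in range(n_timepoints):
            X = epochs[mask, :, t]           # sensor pattern at time t
            clf = LinearDiscriminantAnalysis()
            accuracy[p, t] = cross_val_score(clf, X, y, cv=n_folds).mean()
    return accuracy
```

From each pair's accuracy time course, a decoding onset can be read off as the first above-chance time point and the peak as the time of maximum accuracy.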
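The contrast between distance-based and time-based models can likewise be sketched as follows, under stated assumptions: the summary statistics (mean pairwise accuracy as the decodability measure, a simple threshold-crossing onset) and the use of Spearman correlation against per-pair reaction times are illustrative choices, and the synthetic `accuracy`, `times`, and `rt` arrays are stand-ins for real decoding output and behavioural data.

```python
# Sketch: distance-based vs. time-based predictors of reaction time.
# Synthetic stand-in data replace the real decoding output and RTs.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_pairs, n_timepoints = 1128, 120            # 48 choose 2 pairs, example time grid
times = np.linspace(-0.1, 1.0, n_timepoints)
accuracy = 0.5 + 0.1 * rng.random((n_pairs, n_timepoints))  # stand-in decoding accuracies
rt = 0.5 + 0.2 * rng.random(n_pairs)                        # stand-in mean RT per pair


def onset_latency(acc_timecourse, times, chance=0.5):
    """First time point with above-chance accuracy (a simple proxy; real
    analyses typically require a run of consecutive significant points)."""
    above = np.nonzero(acc_timecourse > chance)[0]
    return times[above[0]] if above.size else np.nan


# Distance-based predictor: overall decodability of each exemplar pair.
decodability = accuracy.mean(axis=1)

# Time-based predictors: onset and peak latency of each pair's time course.
onsets = np.array([onset_latency(acc, times) for acc in accuracy])
peaks = times[accuracy.argmax(axis=1)]

# Higher decodability should predict faster responses (negative correlation);
# later onsets/peaks should predict slower responses (positive correlation).
for name, predictor in [("decodability", decodability),
                        ("onset", onsets), ("peak", peaks)]:
    rho, p = spearmanr(predictor, rt, nan_policy="omit")
    print(f"{name}: rho = {rho:.2f}, p = {p:.3f}")
```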
Meeting abstract presented at VSS 2015