Abstract
Successfully perceiving and understanding others’ body movements supports a variety of perceptual and cognitive processes, such as recognizing people, understanding events, detecting communicative intent, inferring intentions, and feeling compassion and empathy toward others (Blake & Shiffrar, 2007). Despite intense interest from various sub-fields of cognitive neuroscience, much remains unknown about how the brain supports these functions. Here, we investigated the temporal characteristics of neural processing during action perception using electroencephalography (EEG). We used a novel stimulus set of well-matched human and humanoid robot actions to study the role of the observed agent’s visual form and motion kinematics in action processing. Event-related brain potentials (ERPs) were recorded while participants viewed 2-s videos of three agents (Human, Android, Robot) performing recognizable actions: the Human had biological form and motion; the Android had biological form and non-biological motion; and the Robot had non-biological form and non-biological motion. The Android and Robot were the same moving machine presented with two different appearances, and thus featured identical kinematics. We found distinct neural signatures for the processing of biological form and motion, as well as for the congruence of form and motion. Form-sensitive modulation was characterized by (1) a negativity between 210 and 400 ms over bilateral centro-parietal, central, and fronto-central regions, and (2) a positivity between 270 and 370 ms over left parietal areas, both more pronounced for the Robot than for the Human and Android. There was some evidence for biological motion sensitivity between 130 and 230 ms over left parieto-occipital regions, with a more pronounced response to the Human than to the Android. There was also evidence for a neural signature of form–motion congruence processing over frontal regions between 150 and 250 ms, where the Android condition differed from both the Robot and the Human. These results highlight differential spatiotemporal cortical patterns in action perception that depend on the viewed agent’s form and motion kinematics.
Meeting abstract presented at VSS 2012