Abstract
In a fraction of a second, humans can recognize a wide range of actions performed by others. Yet actions pose a uniquely complex challenge: they bridge visual domains and vary along multiple perceptual and semantic features. What features does the brain extract when we view others’ actions, and how are they processed over time? I will present electroencephalography work using natural videos of human actions and rich feature sets to determine the temporal sequence of action perception in the brain. Our work shows that action features, from visual to semantic, are extracted along a temporal gradient, and that distinct processing stages can be dissociated with artificial neural network models. Furthermore, using a multimodal approach with video and text stimuli, we show how conceptual action representations emerge in the brain. Overall, these data reveal the rapid computations underlying action perception in natural settings. The talk will highlight how a temporally resolved approach to natural vision can uncover the neural computations linking perception and cognition.