Abstract
Paying attention to object features such as motion, contours, or colours can enhance visual processing throughout the visual field. Relatedly, when we grasp an object, processing its visual properties is necessary for successful execution of the grasp. However, few studies have examined attention-induced changes to the visual computation of object features, particularly their temporal dynamics. Here, we aimed to clarify the time course of attention to object features using multivariate EEG analyses. We recorded electrophysiological signals from 64 scalp electrodes in human participants while they viewed and acted upon real objects. Objects had one of two shapes (“flower” and “pillow”) and were made of one of two materials (steel and wood). To manipulate attention to these features, participants either grasped and lifted the objects or touched them with their knuckle, thereby making shape and material more or less relevant to the task. We then performed pattern classification of shape and material based on spatiotemporal EEG data. Classifiers reached transient peak accuracies around 100–200 ms after stimulus presentation. Shape classification was more robust than material classification, but there was no marked difference in classification performance between tasks. However, cross-temporal generalization of shape representations revealed that only for grasping did early and late neural generators reactivate one another during action planning. In contrast, shape computations for knuckling involved a chained activation of generators. Our findings suggest that task-related attention modulates the visual processing of shape such that earlier representations are stored and reactivated during action planning, but only if they are relevant to the task at hand.