Abstract
Previous electrophysiology and imaging studies in monkeys investigating the neuronal correlates of action understanding (Perrett et al., 1989; Fogassi et al., 2005; Nelissen et al., 2011) have shown brain responses during passive action observation in superior temporal sulcus, parietal, premotor, and frontal regions. However, it is difficult to draw conclusions from these studies about the monkeys' ability to understand the goal of the observed action, since the experiments required only passive fixation. Therefore, we used an active categorization task to investigate rhesus monkeys' ability to categorize videos of hand-object actions according to the goal of the action (grasping versus not-grasping). After a 3 s video presentation, the video disappeared and two peripheral target points were presented. Monkeys were required to make either a leftward or a rightward saccade, depending on the goal depicted in the action video (grasping or not-grasping, respectively). Once the monkeys performed the categorization task proficiently, they were tested on their ability to generalize to other, untrained grasping or not-grasping actions. After training, performance on the categorization task was above 90% correct. More importantly, generalization tests demonstrated the monkeys' ability to recognize the goal of the depicted action across a wide range of untrained videos. Monkeys were proficient at correctly categorizing both new, untrained grasping actions involving different grip types, objects, or biological effectors (either a male human hand or a monkey hand) and new not-grasping actions, including mimicked grasps and open hands touching the object. However, monkeys failed to generalize to videos showing an artificial prosthetic hand grasping an object. These results demonstrate the feasibility of using cognitively more demanding active action-recognition tasks in rhesus monkeys to investigate the neuronal correlates of action understanding.
Meeting abstract presented at VSS 2013