Abstract
Joint attention is a fundamental ability of humans and other social primates. Gaze direction can signal the behavioral relevance of objects in the environment. Paradigms for exploring joint attention usually require simultaneous gaze measurements from at least two subjects, making them difficult to conduct with non-human primates. Here we present a novel paradigm that uses an avatar in a virtual environment while recording the gaze behavior and joystick responses of monkeys in an experimental setup. We trained two rhesus monkeys to respond to a human avatar's attention by moving a joystick toward the gazed-at object. We designed our social cues by applying natural eye and head movements to a highly realistic human avatar (released by Reallusion). Each trial commenced with the avatar gazing at the animal while four identical objects were presented at the screen's corners. After 500 ms, the avatar shifted its gaze toward a randomly selected object, cueing the animal to move the joystick toward that object to obtain a juice reward. In 10% of trials (catch trials), the avatar's gaze shifted to a position between two objects. Eye positions were measured with an EyeLink eye tracker (SR Research). The animals followed the avatar's gaze, achieving 90% correct trials. We trained a classifier to identify each animal's choice from its eye positions during the 350 ms cue period, achieving 80% accuracy. The classifier's accuracy decreased to 53% on catch trials, in which the avatar directed its gaze to intermediate positions between objects; there, the animals chose one of the two objects closest to the avatar's final gaze position, further indicating that they followed the avatar's gaze cues. Our results demonstrate the use of human avatars in experimental setups to explore joint attention in macaque monkeys, and they reveal a degree of cognitive flexibility and extrapolation of human gaze cues in macaques.
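
As a rough illustration of the decoding step described above, the sketch below shows how a choice classifier could be set up in Python with scikit-learn: per-trial gaze traces from the cue period are flattened into feature vectors and a multiclass classifier is cross-validated. All data, dimensions, and parameter choices here are hypothetical stand-ins for illustration, not the authors' actual pipeline.

```python
# Minimal sketch (not the authors' code) of decoding the animal's upcoming
# choice (one of four corner objects) from eye positions sampled during the
# 350 ms cue period. Data here are synthetic; in the real analysis the
# features would come from EyeLink gaze samples.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_trials = 400           # hypothetical trial count
n_samples = 350          # one (x, y) gaze sample per ms over the cue period
targets = rng.integers(0, 4, size=n_trials)  # chosen object: 0..3 (corners)

# Synthetic gaze traces: noise plus a drift toward the chosen corner.
corners = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
drift = np.linspace(0, 1, n_samples)[None, :, None]  # ramps up over time
gaze = (0.3 * rng.standard_normal((n_trials, n_samples, 2))
        + drift * corners[targets][:, None, :])

# Feature vector per trial: the flattened x/y trace over the cue period.
X = gaze.reshape(n_trials, -1)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, targets, cv=5)
print(f"cross-validated decoding accuracy: {scores.mean():.2f}")
```

Any classifier that handles four classes would serve here; the cross-validated accuracy it reports is the quantity the abstract compares between standard trials (80%) and catch trials (53%).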