Serena De Stefani, Sam Sohn, Mubbasir Kapadia, Jacob Feldman, Peter Pantelis; Perception of Intentionality in Avatars and AI Agents. Journal of Vision 2018;18(10):49. doi: 10.1167/18.10.49.
Many studies have demonstrated that motion can convey intentionality and mental goals, particularly when it is interpreted as originating from an animate agent (Tremoulet & Feldman, 2000). But an AI agent can also convey the intention to do something if it mimics human behavior in the right ways. In these studies, we sought to understand which aspects of behavior are particularly effective at distinguishing human from robot behavior, or at making the two seem equivalent. We wondered in particular whether the efficiency with which a task is accomplished might influence judgments of humanness. We chose to study the traveling salesman task, which can be solved optimally by both computers and humans (at least for small numbers of locations), albeit with different computational strategies (MacGregor & Chu, 2000). In this task, the optimality of a solution can be quantified as the ratio between the total length of that solution and the length of the optimal solution. We also studied other potential factors influencing such judgments, including the sequence in which the targets are visited, the duration of movements between successive targets, the direction of the agent's gaze, and the number of targets viewed while completing the task. In a series of experiments, we recorded an AI agent's behavior at different levels of optimality, and then asked human subjects to evaluate its intelligence, its planning capacity, and whether or not it was controlled by a human being. We also asked other human subjects to solve the same task using a virtual avatar, and asked further participants to evaluate their performances in the same fashion. Results of initial studies show that both optimality and gaze behavior are correlated with the perceived humanness of the agent, suggesting that computational efficiency and gaze variance may serve as cues for distinguishing human from artificial intelligence.
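The optimality measure described above (tour length divided by optimal tour length) can be sketched in a few lines. This is an illustrative computation, not the authors' code: for the small target counts used in such experiments, the optimal tour can be found by brute force over permutations. The function names and example points are hypothetical.

```python
import itertools
import math

def tour_length(points, order):
    """Total length of a closed tour visiting points in the given order."""
    n = len(order)
    return sum(
        math.dist(points[order[i]], points[order[(i + 1) % n]])
        for i in range(n)
    )

def optimality_ratio(points, solution_order):
    """Ratio of a candidate tour's length to the optimal tour's length.

    Brute-force search over all permutations, so this is only feasible
    for small numbers of targets, as in the task described here.
    A ratio of 1.0 means the solution is optimal; larger values mean
    proportionally longer (less efficient) tours.
    """
    indices = range(len(points))
    best = min(
        tour_length(points, perm)
        for perm in itertools.permutations(indices)
    )
    return tour_length(points, solution_order) / best

# Hypothetical example: four targets at the corners of a unit square.
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(optimality_ratio(pts, [0, 1, 2, 3]))  # perimeter tour is optimal: 1.0
print(optimality_ratio(pts, [0, 2, 1, 3]))  # self-crossing tour, ratio > 1
```

An agent's recorded visit order can be scored this way and then varied parametrically to produce behavior at different levels of optimality.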
Meeting abstract presented at VSS 2018