September 2018
Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Perception of Intentionality in Avatars and AI Agents
Author Affiliations
  • Serena De Stefani
    Psychology Department, Rutgers University
  • Sam Sohn
    Computer Science Department, Rutgers University
  • Mubbasir Kapadia
    Computer Science Department, Rutgers University
  • Jacob Feldman
    Psychology Department, Rutgers University
  • Peter Pantelis
    Psychology Department, Rutgers University
Journal of Vision September 2018, Vol. 18, 49.

      Serena De Stefani, Sam Sohn, Mubbasir Kapadia, Jacob Feldman, Peter Pantelis; Perception of Intentionality in Avatars and AI Agents. Journal of Vision 2018;18(10):49.

      © ARVO (1962-2015); The Authors (2016-present)


Many studies have demonstrated that motion can convey intentionality and mental goals, particularly when it is interpreted as originating from an animate agent (Tremoulet & Feldman, 2000). But an AI agent can also convey the intention to do something if it mimics human behavior in the right ways. In these studies, we sought to understand which aspects of behavior are particularly effective at distinguishing human from robot agents, or at making them seem equivalent. In particular, we wondered whether the efficiency with which a task is accomplished might influence judgments of humanness. We chose to study the traveling salesman task, which can be solved optimally by both computers and humans (at least for small numbers of locations), albeit with different computational strategies (MacGregor & Chu, 2000). In this task, the optimality of a solution can be quantified as the ratio between the total length of that solution and the length of the optimal solution. We also studied other potential factors influencing such judgments, including the sequence in which the targets are visited, the duration of movements between successive targets, the direction of the agent's gaze, and the number of targets viewed while completing the task. In a series of experiments, we recorded an AI agent's behavior at different levels of optimality and then asked human subjects to evaluate its intelligence, its planning capacity, and whether it was controlled by a human being. We also asked other human subjects to solve the same task using a virtual avatar, and asked a further group of participants to evaluate their performances in the same fashion. Results of initial studies show that both optimality and gaze behavior are correlated with the perceived humanness of the agent, suggesting that computational efficiency and gaze variance may serve as cues for distinguishing human from artificial intelligence.
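The optimality measure described above, the ratio of a candidate tour's length to the length of the optimal tour, can be sketched in a few lines. This is an illustrative sketch rather than the authors' code: the function names are ours, it assumes a closed tour over 2-D target locations, and it finds the optimum by brute-force enumeration, which is only feasible for the small numbers of targets the abstract mentions.

```python
import itertools
import math

def tour_length(points, order):
    """Total length of a closed tour visiting `points` in the given order.

    `points` is a list of (x, y) pairs; `order` is a sequence of indices.
    """
    return sum(
        math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def optimality_ratio(points, order):
    """Ratio of a tour's length to the optimal tour's length (>= 1.0).

    Brute-force search over permutations; the first target is fixed
    because a closed tour is invariant to rotation of its start point.
    Feasible only for small numbers of targets.
    """
    best = min(
        tour_length(points, (0,) + perm)
        for perm in itertools.permutations(range(1, len(points)))
    )
    return tour_length(points, order) / best
```

For four targets at the corners of a unit square, visiting them around the perimeter yields a ratio of 1.0, while a self-crossing order yields a ratio above 1.2.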

Meeting abstract presented at VSS 2018
