September 2021
Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Intention readout primes action categorization
Author Affiliations
  • Eugenio Scaliti
    Cognition, Motion and Neuroscience Unit, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
  • Giulia Borghini
    Cognition, Motion and Neuroscience Unit, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
  • Kiri Pullar
    Neural Computation Lab, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
  • Andrea Cavallo
    Department of Psychology, Università degli Studi di Torino, Torino, Italy
  • Stefano Panzeri
    Neural Computation Lab, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
  • Cristina Becchio
    Cognition, Motion and Neuroscience Unit, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
Journal of Vision September 2021, Vol. 21, 2629. https://doi.org/10.1167/jov.21.9.2629
Citation: Eugenio Scaliti, Giulia Borghini, Kiri Pullar, Andrea Cavallo, Stefano Panzeri, Cristina Becchio; Intention readout primes action categorization. Journal of Vision 2021;21(9):2629. https://doi.org/10.1167/jov.21.9.2629.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Variations in movement kinematics convey intention-related information. Human observers are able to exploit this information when explicitly prompted to do so. However, the question remains as to whether they spontaneously use this information to process the actions of other people. The present study was designed to address this question. Participants (n = 20) first completed a primed action categorization task. On each trial, they observed either a grasp-to-drink or grasp-to-pour act (prime) followed by a static picture of an agent drinking or pouring (target). The static picture could be congruent (75% of trials) or incongruent (25% of trials) with the intention of the previously observed grasp. Participants were asked to categorize the action displayed in the static picture as fast as possible whilst remaining accurate. This task served to establish whether spontaneous readout of intention information encoded in grasping kinematics facilitates action categorization. Next, participants completed an intention discrimination task wherein they were asked to discriminate the intention of the grasping acts used as primes in the action categorization task. Using a logistic regression fitted to intention discrimination data for each participant, we determined how intention-related information encoded in grasping kinematics is read out with single-trial resolution. Analysis of response times in the primed action categorization task showed that categorization responses were facilitated by congruent kinematic primes (priming effect: 32.4 ± 10.7 ms, mean ± SEM; t(19) = 3.02, p < .01). Importantly, the amount of facilitation varied with single-trial intention readout, such that kinematic primes that were more informatively read out by participants in the intention discrimination task induced larger priming effects (Pearson correlation between priming effect and amount of intention information readout: r = 0.13, p < .001). 
These findings demonstrate that intention-related information encoded in movement kinematics is implicitly read out and spontaneously used to process others’ actions.
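The single-trial readout analysis described above can be sketched as follows. This is a hypothetical illustration, not the authors' code: the kinematic feature, its distribution, and the simulated observer responses are all assumptions. The idea is to fit a logistic regression mapping movement kinematics to a participant's intention-discrimination responses, then score each trial by the probability the fitted model assigns to the true intention.

```python
# Hypothetical sketch of a single-trial intention-readout estimate.
# Data are simulated; in the study, kinematics and responses would come
# from the intention discrimination task.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials = 200

# True intention per trial: 0 = grasp-to-pour, 1 = grasp-to-drink.
intention = rng.integers(0, 2, n_trials)

# One illustrative kinematic feature that carries intention information
# (e.g., a grip-aperture summary; the real study used richer kinematics).
kinematics = intention + rng.normal(0.0, 1.0, n_trials)

# Simulated observer responses: a noisy readout of the kinematics.
response = (kinematics + rng.normal(0.0, 0.8, n_trials) > 0.5).astype(int)

# Fit a logistic regression from kinematics to the observer's responses.
model = LogisticRegression().fit(kinematics.reshape(-1, 1), response)

# Single-trial readout: probability the model assigns to the true intention.
p_drink = model.predict_proba(kinematics.reshape(-1, 1))[:, 1]
readout = np.where(intention == 1, p_drink, 1.0 - p_drink)
```

Trials with higher `readout` values are those whose kinematics were more informatively read out; in the study, these single-trial values were then correlated with the priming effect measured in the action categorization task.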
