September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2019
Humans and Machine Learning Classifiers Can Predict the Goal of an Action Regardless of Social Motivations of the Actor
Author Affiliations & Notes
  • Emalie G McMahon
    Laboratory of Brain and Cognition, National Institute of Mental Health
  • Charles Y Zheng
    Machine Learning Team, National Institute of Mental Health
  • Francisco Pereira
    Machine Learning Team, National Institute of Mental Health
  • Ray Gonzalez
    Department of Psychology, Harvard University
  • Ken Nakayama
    Department of Psychology, Harvard University
  • Leslie G Ungerleider
    Laboratory of Brain and Cognition, National Institute of Mental Health
  • Maryam Vaziri-Pashkam
    Laboratory of Brain and Cognition, National Institute of Mental Health
Journal of Vision September 2019, Vol.19, 219. doi:https://doi.org/10.1167/19.10.219
Abstract

How do people predict the actions of others? Does social context affect prediction? What information enables prediction? To answer these questions, two participants (an “initiator” and his/her partner, the “responder”) played a reaching game while being video recorded. The social context was manipulated by asking the partners to play either competitively or cooperatively. Human subjects watched videos that were cut at different timepoints relative to when the initiator lifted his/her finger and predicted the direction of the movement. In addition, a support vector machine (SVM) classifier was trained to decode the direction of movement from the optical flow of the videos. We found that both humans and the SVM could predict the direction of movement well before the movement began, and both performed slightly better in cooperation than in competition. An analysis of movement speed revealed that the advantage in the cooperative condition was due to slower movements. Finally, the performance of humans and the SVM was similar and correlated, suggesting that a simple algorithm based on instantaneous optical flow suffices to explain human levels of performance. Next, using a searchlight classification method on the videos, we investigated which pixels were most informative of the goal. The searchlight revealed that information is widely distributed throughout the body of the initiator. Furthermore, the performance of the classifier generalized across social conditions, highlighting the similarity of the distribution of information between cooperation and competition. In conclusion, our results show that subtle bodily adjustments prior to the explicit execution of an action reveal action goals. Aside from the speed of movement, social context may not directly affect the availability of information. Thus, not only do we reveal our intentions in cooperation, when communicating the goal is beneficial, but our movements may betray our action goals even when there is incentive to conceal them.
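The decoding analysis described above can be sketched in a few lines. The snippet below is an illustrative outline only, not the authors' code: it trains a linear SVM with cross-validation to classify movement direction from flattened feature vectors, standing in for the per-frame optical-flow fields the abstract describes. The synthetic data, feature dimensions, and signal structure are all assumptions for demonstration; real inputs would be optical flow computed from the recorded videos.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

n_trials, n_features = 200, 50             # trials x flattened flow vectors
directions = rng.integers(0, 2, n_trials)  # 0 = left, 1 = right

# Synthetic stand-in for optical flow: a small direction-dependent shift
# buried in noise, mimicking subtle preparatory body movements.
X = rng.normal(size=(n_trials, n_features))
X[:, :5] += 0.5 * (2 * directions[:, None] - 1)

# Linear SVM with cross-validated accuracy, as in the abstract's
# direction-decoding analysis.
clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, X, directions, cv=5)
print(f"Mean decoding accuracy: {scores.mean():.2f}")  # well above 0.5 chance
```

A searchlight variant would repeat the same fit over small spatial neighborhoods of pixels, mapping where in the frame the direction information lies.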

Acknowledgement: NIH DIRP, NSF STC CCF-1231216 