October 2020, Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract
Do deep networks encode a similar representation compared to a model of target kinematics during an interception task?
Author Affiliations
  • Kamran Binaee
    University of Nevada, Reno
Journal of Vision October 2020, Vol. 20, 1722. https://doi.org/10.1167/jov.20.11.1722
Abstract

Current state-of-the-art deep networks outperform classic object recognition methods and, in some cases, surpass human-level accuracy. However, it is not clear to what extent these networks mimic the properties of the human visual system. Previous studies have reported resemblance between the human visual system and deep neural networks by visualizing the trained filters at each layer; other studies, however, show significant differences between human and deep-network performance on shape discrimination, i.e., match-to-sample tasks that require a more complex representation of object properties. It therefore remains unclear how these networks perform when the representation is meant to guide action, e.g., a hand movement to intercept a ball. To investigate this, we used egocentric screen images from a previously published Virtual Reality (VR) ball-catching dataset in which subjects attempted to intercept a VR ball flying in depth. The images were fed to a pre-trained deep network, and the extracted deep features were used to train an SVM regression model to predict the ground-truth hand position. For comparison, a second SVM model was trained on features computed from the kinematics of the ball's motion: angular size, velocity, acceleration, and expansion rate. Our results show that the cross-correlation between the activation pattern of the deep features and the kinematic features is highest in the first few layers of the deep network. This suggests that the initial layers of a deep network, when compared to a model of target kinematics, encode a similar representation of the visual information appropriate for guiding hand movements in a target interception task.
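As an illustration of the pipeline described above, a minimal sketch follows. The abstract does not specify the network architecture, framework, feature layers, regression hyperparameters, or exact correlation measure, so everything here is an assumption: the sketch uses PyTorch/torchvision with a pre-trained VGG16 as the deep network, scikit-learn's SVR as the regression model, and random stand-in arrays (frames, hand_xyz, kinematics) in place of the real VR dataset.

# Minimal sketch of the analysis pipeline, for illustration only.
# Assumptions (not stated in the abstract): PyTorch/torchvision with a
# pre-trained VGG16, scikit-learn's SVR, and random stand-in arrays
# (`frames`, `hand_xyz`, `kinematics`) in place of the real dataset.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

N = 60                                        # number of video frames
frames = torch.rand(N, 3, 112, 112)           # stand-in egocentric VR images
hand_xyz = np.random.rand(N, 3)               # stand-in ground-truth hand position
kinematics = np.random.rand(N, 4)             # stand-in angular size, velocity,
                                              # acceleration, expansion rate

# Pre-trained deep network; the convolutional stack accepts any input size.
net = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

def layer_features(x, upto, batch=10):
    """Spatially averaged activations of layer `upto`, one vector per frame."""
    feats = []
    with torch.no_grad():
        for i in range(0, len(x), batch):
            h = normalize(x[i:i + batch])
            for j, layer in enumerate(net):
                h = layer(h)
                if j == upto:
                    break
            feats.append(h.mean(dim=(2, 3)))  # (batch, channels)
    return torch.cat(feats).numpy()

# SVM regression from each feature space to the recorded hand position.
X_deep = layer_features(frames, upto=4)
svm_deep = MultiOutputRegressor(SVR()).fit(X_deep, hand_xyz)
svm_kin = MultiOutputRegressor(SVR()).fit(kinematics, hand_xyz)
print("deep-feature SVM R^2:", svm_deep.score(X_deep, hand_xyz))
print("kinematics   SVM R^2:", svm_kin.score(kinematics, hand_xyz))

def mean_peak_xcorr(deep, kin):
    """Mean peak normalized cross-correlation over all feature pairs."""
    deep = (deep - deep.mean(0)) / (deep.std(0) + 1e-8)
    kin = (kin - kin.mean(0)) / (kin.std(0) + 1e-8)
    peaks = [np.abs(np.correlate(d, k, mode="full")).max() / len(d)
             for d in deep.T for k in kin.T]
    return float(np.mean(peaks))

# Layer-wise comparison of deep activations against the kinematic features.
for idx in (2, 9, 16, 23, 30):                # a few VGG16 conv stages
    score = mean_peak_xcorr(layer_features(frames, idx), kinematics)
    print(f"layer {idx:2d}: mean peak cross-correlation = {score:.3f}")

Under the abstract's finding, the printed cross-correlation scores would peak at the earliest layer indices; with the random stand-in data above, the numbers are of course meaningless and serve only to show that the pipeline runs end to end.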
