Vision Sciences Society Annual Meeting Abstract  |   September 2021
Perceptual haptic representation of materials emerges from efficient encoding
Author Affiliations & Notes
  • Anna Metzger
    Justus-Liebig University Giessen
  • Matteo Toscani
    Justus-Liebig University Giessen
  • Footnotes
    Acknowledgements  This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project 222641018 – SFB/TRR 135.
Journal of Vision September 2021, Vol.21, 2617. doi:https://doi.org/10.1167/jov.21.9.2617
      Anna Metzger, Matteo Toscani; Perceptual haptic representation of materials emerges from efficient encoding. Journal of Vision 2021;21(9):2617. https://doi.org/10.1167/jov.21.9.2617.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

It has been proposed that perceptual representations emerge from learning to efficiently encode sensory input. For instance, the highly correlated excitations of the long- and middle-wavelength-sensitive cones in the retina are transformed into an efficient, decorrelated representation (i.e., two color-opponent channels and a luminance channel). There is currently considerable interest in whether higher-level representations can also be learned by efficiently encoding the sensory input, in vision and in other senses. Here we test the hypothesis that the haptic perceptual representation of materials emerges from efficient encoding. When the surface of a material is touched, its spatial structure translates into a vibration on the skin. The perceptual system evolved to translate this pattern into a representation that makes it possible to distinguish between different materials. We trained a deep neural network with unsupervised learning (an autoencoder) to reconstruct the vibratory patterns elicited by human haptic exploration of 108 samples of different materials. The learned compressed representation (i.e., the latent space) supports classification of material categories (plastic, stone, wood, fabric, leather/wool, paper, and metal). More importantly, distances between these categories in the latent space resemble perceptual distances (computed from human judgments of material properties, e.g., roughness, after visual or haptic exploration), suggesting a similar coding. These results support the idea that perceptual representations emerge from unsupervised learning as a consequence of efficient encoding of the sensory input. We further show that the temporal tuning of the emergent latent dimensions of the autoencoder is similar to the properties of human tactile receptors (Pacinian and Rapidly Adapting afferents), suggesting that our tactile sensors evolved to efficiently encode the statistics of natural textures as they are sensed through vibrations. We replicated our findings with four different networks, suggesting that our results do not depend on the choice of network architecture.
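To make the pipeline concrete, here is a minimal sketch of the kind of analysis described above: an autoencoder is trained to reconstruct vibration snippets, its latent codes are averaged per material category, and pairwise latent distances are computed for comparison with perceptual distances. This is an illustration only; the window length, layer sizes, latent dimensionality, and training settings are assumptions, not the authors' implementation (written here in Python with PyTorch and NumPy).

import numpy as np
import torch
import torch.nn as nn

WINDOW = 1024   # assumed length of a vibration snippet, in samples
LATENT = 16     # assumed dimensionality of the compressed latent space

class Autoencoder(nn.Module):
    """Fully connected autoencoder: compress a snippet, then reconstruct it."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(WINDOW, 256), nn.ReLU(),
            nn.Linear(256, LATENT),
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT, 256), nn.ReLU(),
            nn.Linear(256, WINDOW),
        )

    def forward(self, x):
        z = self.encoder(x)          # compressed (latent) representation
        return self.decoder(z), z    # reconstruction and latent code

def train(model, signals, epochs=50, lr=1e-3):
    # signals: (n_snippets, WINDOW) array of recorded skin vibrations
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    x = torch.as_tensor(signals, dtype=torch.float32)
    for _ in range(epochs):
        opt.zero_grad()
        recon, _ = model(x)
        loss_fn(recon, x).backward()  # reconstruction error drives efficient encoding
        opt.step()
    return model

def category_distances(model, signals, labels):
    # Average latent code per material category, then pairwise Euclidean distances.
    with torch.no_grad():
        _, z = model(torch.as_tensor(signals, dtype=torch.float32))
    z = z.numpy()
    cats = sorted(set(labels))
    means = np.stack([z[np.array(labels) == c].mean(axis=0) for c in cats])
    return cats, np.linalg.norm(means[:, None] - means[None, :], axis=-1)

The resulting category-distance matrix would then be compared against perceptual distances derived from human property ratings, for example by rank-correlating the off-diagonal entries of the two matrices.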
