August 2016
Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2016
Generic decoding of seen and imagined objects using features of deep neural networks
Author Affiliations
  • Tomoyasu Horikawa
    Computational Neuroscience Laboratories, ATR, Kyoto, Japan
  • Yukiyasu Kamitani
    Graduate School of Informatics, Kyoto University, Kyoto, Japan
Journal of Vision September 2016, Vol.16, 372.
      Tomoyasu Horikawa, Yukiyasu Kamitani; Generic decoding of seen and imagined objects using features of deep neural networks. Journal of Vision 2016;16(12):372.

      © ARVO (1962-2015); The Authors (2016-present)

Object recognition is a key function in both human and machine vision. Recent studies suggest that a deep neural network (DNN) can serve as a good proxy for the hierarchically organized feed-forward visual system underlying object recognition. While brain decoding has enabled the prediction of mental contents from brain activity, predictions have been limited to the categories used in decoder training. Here, we present an approach for decoding arbitrary objects seen or imagined by subjects, employing DNNs and a large image database. We assume that an object category is represented by a set of features rendered invariant through hierarchical processing, and we show that visual features can be predicted from fMRI patterns, with greater accuracy for low-/high-level features in lower/higher visual areas, respectively. Furthermore, visual feature vectors predicted by stimulus-trained decoders can be used to identify seen and imagined objects (extending beyond the decoder training set) from a set of computed features for numerous objects. Successful object identification from imagery-induced brain activity suggests that the feature-level representations elicited during visual perception may also be recruited for top-down visual imagery. Our results demonstrate a tight link between the cortical hierarchy and the levels of a DNN, and the utility of this correspondence for brain-based information retrieval. Because our approach predicts arbitrary object categories seen or imagined by subjects without pre-specifying target categories, it may also be applicable to decoding the contents of dreaming. These results contribute to a better understanding of the neural representations of the hierarchical visual system during perception and mental imagery.
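The decoding scheme described above — train a regression model to predict DNN feature vectors from fMRI patterns, then identify an object (possibly from a category never used in training) by matching the predicted feature vector against precomputed feature vectors for many candidate categories — can be illustrated with a minimal sketch. This is not the authors' implementation: the dimensions, the synthetic data, the closed-form ridge regression, and the correlation-based matching are all illustrative assumptions standing in for real fMRI data and real DNN features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): voxels, DNN feature units,
# training samples, and candidate object categories.
n_voxels, n_feat, n_train, n_categories = 200, 50, 300, 100

# Ground-truth linear mapping used only to synthesize fMRI-like responses.
W_true = rng.normal(size=(n_voxels, n_feat))

# Candidate feature vectors, one per category (standing in for DNN features
# averaged over example images of each category).
category_feats = rng.normal(size=(n_categories, n_feat))

# Training set: stimuli drawn from only the first half of the categories,
# so the remaining categories are never seen by the decoder.
train_labels = rng.integers(0, n_categories // 2, size=n_train)
F_train = category_feats[train_labels] + 0.1 * rng.normal(size=(n_train, n_feat))
X_train = F_train @ W_true.T + 0.5 * rng.normal(size=(n_train, n_voxels))

# Ridge regression, closed form: learn to predict feature values from voxels.
lam = 10.0
A = X_train.T @ X_train + lam * np.eye(n_voxels)
W_dec = np.linalg.solve(A, X_train.T @ F_train)  # shape (n_voxels, n_feat)

# Test on a novel category outside the training set ("generic" decoding).
test_cat = n_categories - 1
x_test = category_feats[test_cat] @ W_true.T + 0.5 * rng.normal(size=n_voxels)
f_pred = x_test @ W_dec

# Identify the object: correlate the predicted feature vector with every
# candidate category's feature vector and take the best match.
corr = [np.corrcoef(f_pred, c)[0, 1] for c in category_feats]
identified = int(np.argmax(corr))
print("identified:", identified, "true:", test_cat)
```

The key point the sketch captures is that the decoder learns a mapping into feature space rather than a classifier over fixed labels, so identification can range over any category for which feature vectors can be computed, including ones absent from training.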

Meeting abstract presented at VSS 2016

