Vision Sciences Society Annual Meeting Abstract | August 2009
Decoding human visual cortical activity evoked by continuous time-varying movies
Author Affiliations
  • Shinji Nishimoto
    Helen Wills Neuroscience Institute, University of California at Berkeley
  • An Vu
    Program in Bioengineering, University of California at Berkeley
  • Jack Gallant
    Helen Wills Neuroscience Institute, University of California at Berkeley, and Program in Bioengineering, University of California at Berkeley
Journal of Vision August 2009, Vol. 9, 667. https://doi.org/10.1167/9.8.667
Abstract

In a recent study from our laboratory (Kay et al., Nature 2008, v.452, 352–355) we showed that brain activity measurements could be used to identify which specific static natural image was seen by an observer, even if the image was selected at random from a database consisting of thousands of such images. Here we demonstrate identification of continuous time-varying natural movies from brain activity measurements. We used fMRI to measure brain activity of human observers while they watched continuous, time-varying natural movies. We describe how stimuli are mapped onto measured brain activity in early visual areas by means of an explicit, spatio-temporal encoding model that is fit individually to the data from each voxel. The fitted models for voxels in early visual areas are typically spatio-temporally localized and frequency bandpass. When these models are used to perform movie identification (on a separate set of movies that were not used in fitting), we can identify which specific 20-second movie was seen by an observer with almost perfect accuracy. Furthermore, we can identify one-second movie clips to within ±1 second of their position in the original movie. Our results demonstrate that appropriate voxel-based encoding models can recover relatively fine spatio-temporal information about continuous visual experiences from brain activity measurements. We speculate that it might soon be possible to use similar techniques to reconstruct continuous visual experiences directly.
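The identification procedure can be summarized in a brief sketch. Below is a minimal Python illustration, not the authors' implementation: the function names, the ridge-regression estimator, and the correlation-based matching rule are assumptions chosen for clarity, standing in for the voxel-wise spatio-temporal encoding models described above.

    # Hypothetical sketch (not the authors' code): fit a linear encoding
    # model per voxel, then identify a held-out movie clip by matching
    # predicted to measured fMRI responses.
    import numpy as np

    def fit_encoding_models(features, responses, alpha=1.0):
        """Fit one linear weight vector per voxel.

        features  : (time, n_features) stimulus features; in practice these
                    would be lagged or convolved to absorb hemodynamic delay.
        responses : (time, n_voxels) measured BOLD responses.
        Returns (n_features, n_voxels) ridge weights, W = (F'F + aI)^-1 F'R.
        """
        F = features
        n_features = F.shape[1]
        return np.linalg.solve(F.T @ F + alpha * np.eye(n_features),
                               F.T @ responses)

    def identify_clip(weights, candidate_features, measured):
        """Return the index of the candidate clip whose predicted response
        pattern best correlates with the measured (time, n_voxels) data."""
        scores = []
        for F_clip in candidate_features:   # one feature matrix per clip
            predicted = (F_clip @ weights).ravel()
            scores.append(np.corrcoef(predicted, measured.ravel())[0, 1])
        return int(np.argmax(scores))

Under this scheme, a clip is identified correctly when the measured response correlates most strongly with the model's prediction for the clip actually viewed; running the same matching at one-second granularity is one way to localize a short clip within a longer movie.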

Nishimoto, S., Vu, A., & Gallant, J. (2009). Decoding human visual cortical activity evoked by continuous time-varying movies [Abstract]. Journal of Vision, 9(8):667, 667a, http://journalofvision.org/9/8/667/, doi:10.1167/9.8.667.
Footnotes
 Support: NIH/NEI