Shinji Nishimoto, An Vu, Jack Gallant; Decoding human visual cortical activity evoked by continuous time-varying movies. Journal of Vision 2009;9(8):667. doi: 10.1167/9.8.667.
In a recent study from our laboratory (Kay et al., Nature 2008, v.452, 352–355) we showed that brain activity measurements could be used to identify which specific static natural image was seen by an observer, even if the image was selected at random from a database consisting of thousands of such images. Here we demonstrate identification of continuous time-varying natural movies from brain activity measurements. We used fMRI to measure brain activity of human observers while they watched continuous, time-varying natural movies. We describe how stimuli are mapped onto measured brain activity in early visual areas by means of an explicit, spatio-temporal encoding model that is fit individually to the data from each voxel. The fitted models for voxels in early visual areas are typically spatio-temporally localized and frequency bandpass. When these models are used to perform movie identification (on a separate set of movies that were not used in fitting), we can identify which specific 20-second movie was seen by an observer with almost perfect accuracy. Furthermore, we can identify one-second movie clips to within +/− one second of their position in the original movie. Our results demonstrate that appropriate voxel-based encoding models can recover relatively fine spatio-temporal information about continuous visual experiences from brain activity measurements. We speculate that it might soon be possible to use similar techniques to reconstruct continuous visual experiences directly.
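The identification procedure described above can be illustrated with a minimal sketch. This is not the authors' code: it assumes simulated data, a simple linear feature-to-voxel mapping fit per voxel by ridge regression, and correlation-based matching of measured to predicted response patterns, in place of the actual spatio-temporal bandpass encoding models fit to fMRI data.

```python
# Hypothetical sketch of encoding-model identification (not the authors'
# implementation): fit a linear model per voxel, predict responses to
# held-out clips, and identify each measured response by correlation.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_voxels = 50, 100   # assumed sizes, for illustration only
n_train, n_test = 200, 20

# Simulated ground-truth mapping from stimulus features to voxel responses
W_true = rng.standard_normal((n_features, n_voxels))

X_train = rng.standard_normal((n_train, n_features))
Y_train = X_train @ W_true + 0.5 * rng.standard_normal((n_train, n_voxels))

# Fit ridge-regression encoding models for all voxels in one closed form
lam = 1.0
W_hat = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_features),
                        X_train.T @ Y_train)

# Held-out clips: measure (simulate) responses and predict them
X_test = rng.standard_normal((n_test, n_features))
Y_test = X_test @ W_true + 0.5 * rng.standard_normal((n_test, n_voxels))
Y_pred = X_test @ W_hat

def identify(y_measured, Y_predicted):
    """Return index of the clip whose predicted voxel pattern best
    correlates with the measured response pattern."""
    corrs = [np.corrcoef(y_measured, y_hat)[0, 1] for y_hat in Y_predicted]
    return int(np.argmax(corrs))

hits = sum(identify(Y_test[i], Y_pred) == i for i in range(n_test))
accuracy = hits / n_test
print(f"identification accuracy: {accuracy:.2f}")
```

With realistic fMRI noise and thousands of candidate clips the problem is much harder, but the logic is the same: the clip whose model-predicted activity pattern best matches the measured pattern is declared the one the observer saw.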