Frank Tong, Yukiyasu Kamitani; Neural decoding of seen and attended motion directions from human cortical activity. Journal of Vision 2006;6(6):825. doi: 10.1167/6.6.825.
© ARVO (1962-2015); The Authors (2016-present)
In recent fMRI studies, we have shown that ensemble activity in human visual cortex contains robust information that allows for accurate decoding of seen and attended visual orientations (Kamitani & Tong, Nature Neuroscience, 2005). By pooling weak feature-selective information from many voxels, we can obtain robust measures of ensemble feature selectivity. Here, we investigated whether ensemble activity patterns in human visual areas contain reliable information about seen and attended motion directions. fMRI activity was monitored while subjects viewed random dots drifting in 1 of 8 directions. A linear decoder was trained to classify activity patterns induced by different motion directions, then tested on independent data. Ensemble activity from individual areas (V1–V4, MT+) led to reliable decoding of seen motion direction. In comparison, orientation-decoding performance was highly precise for V1 and V2 but at chance level for MT+. Next, we tested whether it was possible to decode which of two overlapping motion directions is the focus of the subject's attention. The decoder was trained on single motion directions (clockwise or counterclockwise), then tested with ambiguous displays containing both overlapping directions while subjects performed a speed discrimination task on one of the two sets of moving dots. Direction-selective ensemble activity from all visual areas was reliably biased towards the attended motion direction, and this bias was reliable for individual areas V1, V2, and MT+. Our results indicate that human visual areas are sensitive to different motion directions, and their activity can be reliably biased by feature-based attention.
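The pooling logic described above — many voxels each carrying weak direction-selective signal, combined by a linear decoder — can be illustrated with a minimal simulation. This sketch is not the authors' analysis code; the voxel tuning model, noise levels, and nearest-template (linear) classification rule are all illustrative assumptions, used only to show how ensemble pooling yields above-chance decoding of 8 motion directions.

```python
# Minimal sketch (illustrative assumptions, not the authors' method):
# each simulated voxel has a random preferred direction and weak tuning;
# pooling across voxels lets a linear template decoder classify direction.
import math
import random

random.seed(0)
N_VOX, N_DIR, N_TRAIN, N_TEST = 100, 8, 20, 20

# Random preferred direction per voxel; tuning gain is weak vs. noise (std 1).
prefs = [random.uniform(0, 2 * math.pi) for _ in range(N_VOX)]
GAIN = 0.5

def trial(direction_idx):
    """One noisy voxel activity pattern evoked by a given motion direction."""
    theta = 2 * math.pi * direction_idx / N_DIR
    return [GAIN * math.cos(theta - p) + random.gauss(0, 1) for p in prefs]

# "Train": average patterns per direction to form one linear template each.
templates = []
for d in range(N_DIR):
    trials = [trial(d) for _ in range(N_TRAIN)]
    templates.append([sum(t[v] for t in trials) / N_TRAIN
                      for v in range(N_VOX)])

def decode(pattern):
    """Pick the template with the largest dot product (a linear decision rule)."""
    scores = [sum(w * x for w, x in zip(tmpl, pattern)) for tmpl in templates]
    return scores.index(max(scores))

# Test on independent trials; chance level is 1/8 = 12.5%.
correct = sum(decode(trial(d)) == d
              for d in range(N_DIR) for _ in range(N_TEST))
accuracy = correct / (N_DIR * N_TEST)
print(f"decoding accuracy: {accuracy:.2f} (chance = {1 / N_DIR:.3f})")
```

Although no single simulated voxel can distinguish the directions reliably on its own, the pooled linear readout classifies held-out trials well above the 12.5% chance level, which is the essence of the ensemble-decoding approach the abstract describes.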