Abstract
Any neural representation of a visual stimulus is necessarily uncertain, due to sensory noise and ambiguity in the inputs. Recent advances in fMRI decoding provide a principled approach for estimating the amount of uncertainty in cortical stimulus representations (van Bergen, Ma, Pratte & Jehee, 2015, Nature Neuroscience). However, these previous findings were limited to orientation perception. Here, we demonstrate that a similar decoding approach can be used to characterize the degree of uncertainty in cortical representations of motion. Participants viewed stimuli consisting of dots that moved coherently in a random direction, while their brain activity was measured with fMRI. Shortly after the dots disappeared from the screen, observers reported the direction of motion. Using a probabilistic analysis approach, we decoded the posterior probability distribution of motion direction from activity in visual areas V1–V4 and hMT+. We found that the decoded posterior distributions were generally bimodal, with two distinct peaks separated by roughly 180 degrees. One peak was reliably larger than the other, and centered on the true direction of stimulus motion. This bimodality suggests that the decoded distributions reflected not only motion, but also orientation information in the patterns of cortical activity. To assess the degree to which the decoded distributions reflected perceptual uncertainty, we computed the entropy of the decoded distributions. We compared this measure of uncertainty with behavioral variability, reasoning that a more precise representation in cortex should result in less variable (more precise) behavior. We found that uncertainty decoded from visual cortex was linked to participant behavior. Specifically, the entropy of the decoded distributions reliably predicted the variability in the observers' estimates of motion direction. This suggests that the precision of motion perception can be reliably extracted from activity in human visual cortex.
Acknowledgement: This work was supported by a Radboud Excellence Fellowship (A.C.) and ERC Starting Grant 677601 (J.J.).
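The entropy measure described in the abstract can be sketched as follows. This is a minimal illustration with a synthetic bimodal posterior (a von Mises mixture with a dominant peak at the true direction and a smaller peak 180 degrees away); the function names, mixture weights, and concentration parameters are assumptions for illustration, not the authors' actual decoder.

```python
import numpy as np

def decoded_posterior(directions, mu, kappa=4.0, w=0.7):
    """Illustrative bimodal posterior over motion direction (degrees):
    a dominant von Mises peak at the true direction `mu` plus a smaller
    peak 180 degrees away, mimicking residual orientation information.
    (Synthetic stand-in for a decoded distribution; parameters assumed.)"""
    rad = np.deg2rad(directions - mu)
    main = np.exp(kappa * np.cos(rad))          # peak at the true direction
    alt = np.exp(kappa * np.cos(rad - np.pi))   # smaller peak 180 deg away
    p = w * main + (1 - w) * alt
    return p / p.sum()                          # normalize over direction bins

def entropy_bits(p):
    """Shannon entropy of a discretized distribution, in bits.
    Lower entropy corresponds to a more precise (less uncertain) representation."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

directions = np.arange(0, 360)                  # 1-degree bins
p = decoded_posterior(directions, mu=90.0)
H = entropy_bits(p)
```

In this framing, the decoded entropy per trial would be correlated with the variability of the observer's direction estimates across trials, testing whether a less certain cortical representation predicts less precise behavior.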