Abstract
There is mounting evidence that the brain uses predictive mechanisms to encode the position of moving objects. Recent theoretical work has shown that for hierarchical networks (e.g., the visual system) to accurately represent the position of moving objects under neural delays, predictions must be generated along both forward and backward pathways. An important consequence of this is the alignment of position representations across network layers. However, empirical evidence that this occurs for position representations in the human brain is lacking. In this study, we investigated the temporal dynamics of object-position representations across human visual cortex during smooth motion processing. Participants (N = 18, 2 sessions) viewed a stimulus moving along a circular trajectory while EEG was recorded. Using multi-class linear discriminant analysis (LDA), we constructed high-resolution probabilistic maps of the stimulus’s location over time. These maps revealed clear evidence of ‘representational overshoot’ following the unexpected disappearance or reversal of the stimulus, indicative of predictive position encoding. Importantly, examining neural dynamics immediately following motion onset, we found evidence of rapid temporal re-alignment between distinct object-position representations. We show that a similar temporal shift emerges spontaneously at all layers of a simulated neural network via spike-timing-dependent plasticity, providing a simple account of this predictive effect. Ultimately, this study sheds light on the neural encoding of moving-object position and constitutes the first empirical evidence of predictive temporal re-alignment occurring in the human brain.
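As a rough illustration of the decoding approach summarised above (multi-class LDA yielding probabilistic maps of stimulus position), the following minimal sketch fits an LDA classifier to simulated EEG-like channel data and reads out per-trial position probabilities via `predict_proba`. This is not the authors' pipeline; all variable names, the number of position bins, channel counts, and noise levels are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the study's actual pipeline):
# decode the angular position of a stimulus from simulated EEG-like
# data with multi-class LDA, producing a probabilistic position map.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

n_positions = 16   # assumed discrete bins along the circular trajectory
n_trials = 40      # assumed trials per position
n_channels = 32    # assumed EEG channel count

# Assume each position evokes a distinct spatial pattern across channels,
# corrupted by trial-to-trial noise.
patterns = rng.normal(size=(n_positions, n_channels))
y = np.repeat(np.arange(n_positions), n_trials)
X = patterns[y] + rng.normal(scale=2.0, size=(y.size, n_channels))

# Fit multi-class LDA; predict_proba gives, for each trial, a probability
# distribution over the position bins -- the "probabilistic map".
clf = LinearDiscriminantAnalysis()
clf.fit(X, y)
prob_map = clf.predict_proba(X)  # shape: (n_positions * n_trials, n_positions)

print(prob_map.shape)
print(bool(np.allclose(prob_map.sum(axis=1), 1.0)))  # each row is a distribution
```

In the study itself, such maps would be computed at each time point (with proper cross-validation), so that the peak of the probability distribution can be tracked over time to reveal effects like representational overshoot.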