Heather L Dean, Michael L Platt; Spatial representations in posterior cingulate cortex. Journal of Vision 2003;3(9):427. doi: https://doi.org/10.1167/3.9.427.
Posterior cingulate cortex (CGp) is thought to participate in sensorimotor transformations linking visual stimuli with saccades. CGp is strongly connected with visual and premotor cortical areas, and CGp neurons respond following saccades. The activity of CGp neurons has previously been shown to be modulated by the position of the eye in the orbit as well as by saccade direction and amplitude. The goals of this study were to establish whether the timing of CGp responses depends on the timing of task events; to determine if the spatial structure of CGp responses can be quantified using Gaussian or planar functions, as in other visuomotor areas; and to determine quantitatively which coordinate framework CGp neurons use to encode spatial information. To address the first two goals, single CGp neurons were studied while monkeys (M. mulatta) performed reaction-time and delayed-saccade trials guided by targets located throughout the central 36° of visual space. CGp neurons responded after contralateral target onset as well as after contraversive movement onset. Plots of firing rate against horizontal and vertical saccade amplitude (response fields) were well described by tilted planes. To determine the coordinates in which CGp responses are anchored, subjects performed delayed-saccade trials initiated from different starting positions to targets appearing along an axis passing through the neuronal response field. Neuronal activity was measured during 11 sequential epochs on each trial, segregated by fixation position, and plotted as a function of both movement vector and final eye position. For most CGp neurons, tuning curves were better aligned when plotted as a function of final eye position than of movement vector, suggesting that CGp encodes information in a head- or world-centered coordinate framework. To differentiate between these possibilities, tuning curves were then compared before and after rotating the monkey with respect to the visual display.
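The planar response-field description above amounts to regressing firing rate on horizontal and vertical saccade amplitude. The following sketch illustrates that kind of fit on synthetic data; it is not the authors' analysis code, and all variable names, coefficient values, and the noise model are assumptions for illustration only.

```python
import numpy as np

# Illustrative planar response-field fit: rate = a*dx + b*dy + c,
# where (dx, dy) are horizontal and vertical saccade amplitudes.
# Synthetic data spanning the central 36 deg of visual space
# (amplitudes from -18 to +18 deg); coefficients are made up.
rng = np.random.default_rng(0)
n = 200
dx = rng.uniform(-18, 18, n)                    # horizontal amplitude (deg)
dy = rng.uniform(-18, 18, n)                    # vertical amplitude (deg)
true_a, true_b, true_c = 0.8, -0.3, 12.0        # hypothetical plane
rate = true_a * dx + true_b * dy + true_c + rng.normal(0.0, 1.0, n)

# Ordinary least squares: solve [dx dy 1] @ [a b c]^T ~= rate
A = np.column_stack([dx, dy, np.ones(n)])
(a, b, c), *_ = np.linalg.lstsq(A, rate, rcond=None)
print(f"fitted plane: a={a:.2f}, b={b:.2f}, c={c:.2f}")
```

The fitted slopes (a, b) give the tilt of the plane, i.e., how strongly firing rate varies with horizontal and vertical amplitude; the same regression run with final eye position in place of the movement vector is one way to compare candidate coordinate frameworks.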