Abstract
Eye-centered visual codes have been observed throughout the dorsal visual stream and cortical saccade system, but the nature of visual coding in prefrontal cortex is less clear. We examined this question by recording single neurons from dorsolateral prefrontal cortex (DLPFC) in two monkeys trained to perform a head-unrestrained reaching paradigm. Animals touched one of three central LEDs at waist level while maintaining gaze on a central fixation dot and were rewarded if they touched a target appearing at one of 15 locations in a 40° × 20° (visual angle) array. Preliminary analysis of 228 neurons in one monkey showed an assortment of target/stimulus-, gaze-, pre-reach-, and reach-related responses in DLPFC. Most neurons fell into three main groups: “early” (increased firing rate during target presentation and gaze onset), “late” (increased firing rate near the end of the reach), and “early-late” responses that spanned both periods. Here, we focused on analysis of the “visual” response of 92 spatially tuned “early” neurons, 80–180 ms after visual target onset. We first tested for gaze, head, and hand gain fields during the different neuronal responses and, after removing the gain-field effects, fitted the residual data against various spatial models related to the target, eye, head, and arm. We found that the visual response best encoded the target relative to space (Ts) at the population level, with the target relative to the eye (Te) significantly eliminated. At the single-unit level, preferred fits were distributed across all three visual models (Te, Th, Ts), and when other motor models were tested, some “visual” responses actually preferred parameters such as future head position. These data suggest that early “visual” responses in DLPFC reflect complex levels of spatial processing during reaching, perhaps for action planning.