Abstract
The posterior parietal area V6A in the monkey is a key node of the dorso-medial visual stream. V6A is heavily involved in sensory-motor transformations and is modulated by a plethora of different factors during visually guided reaching tasks. Most previous studies examined the encoding of gaze position/saccades or of reaching movements separately. Although this simplification traditionally makes the data more interpretable, it fails to provide an overall picture of the multimodal representations in V6A. Thanks to the flexibility offered by Generalized Linear Models (GLMs), these separate approaches can be combined in a single framework, making it possible to study how a variety of information is encoded in individual cells. In the present study, we recorded 181 neurons from V6A in two Macaca fascicularis monkeys while the animals performed a delayed reaching task in darkness towards nine visual targets placed at different directions and depths. We then built, for each cell separately, a Poisson GLM that included variables related to gaze and arm movements. The fitted models were able to explain neural activity both before the task, when eye movements were still allowed, and during reaching, when gaze was fixed on the visual target. We finally computed a ‘functional fingerprint’ representative of each neuron's modulation. We found that variables related to gaze position and to arm movement were randomly distributed and mixed across the V6A neural population, rather than being segregated in distinct subpopulations of cells. Compared to previous works, our results provide for the first time a detailed quantitative account of how multiple, heterogeneous parameters, linked to both the visual and the motor domains, are encoded in V6A at the single-cell level within a single task, offering an important contribution to understanding the integration of different inputs within the posterior parietal cortex.