Mohammad Hovaidi Ardestani, Martin Giese; Biophysically plausible neural model for the interaction between visual and motor representations of action. Journal of Vision 2017;17(10):1167. doi: 10.1167/17.10.1167.
INTRODUCTION: Action perception and action execution are intrinsically linked in the human brain. Experiments show that concurrent motor execution influences the visual perception of actions. This interaction is mediated by action-selective neurons in premotor and parietal cortex. We have developed a model based on biophysically realistic spiking neurons that accounts for such interactions.

METHODS: Our model represents different motor actions by mutually coupled neural fields. One field represents the perceived action (vision field), and the other the associated motor program (motor field). Each field consists of coupled ensembles of exponential integrate-and-fire neurons (Brette et al., 2005) and stabilizes travelling local solutions (activity peaks) that either follow the stimulus pattern in the vision field or propagate autonomously after a 'go' signal in the motor field. The two fields are coupled by interaction kernels that stabilize solutions with synchronously propagating pulses in both fields. Representations for different actions inhibit each other. We used the model to reproduce the results of several experiments on action-perception coupling and mirror neurons.

RESULTS: Consistent with experimental data, this architecture provides a unifying account for the spatial and temporal tuning of action-perception coupling (Christensen et al., 2011), and for the influence of action perception on the variability of execution (Kilner et al., 2003). The model reproduces the behavior of the neural population vector trajectories of mirror neurons in premotor cortex (Caggiano et al., 2016). Duplicating the model architecture makes it possible to reproduce the spontaneous synchronization of two observers who see each other executing periodic body movements (Schmidt et al., 1990).
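The single-neuron dynamics underlying the fields can be sketched as follows. This is a minimal, hypothetical illustration of one exponential integrate-and-fire neuron under forward-Euler integration; the parameter values are typical textbook fits in the spirit of Brette et al. (2005), not the ones actually used in the model, and the function name `simulate_eif` is an assumption for illustration.

```python
import math

# Hypothetical EIF parameters (typical published fits, NOT the
# abstract's actual model parameters), in SI units.
C = 281e-12      # membrane capacitance (F)
g_L = 30e-9      # leak conductance (S)
E_L = -70.6e-3   # resting / leak reversal potential (V)
V_T = -50.4e-3   # threshold slope potential (V)
D_T = 2e-3       # slope factor of the exponential term (V)
V_CUT = 0.0      # numerical spike-detection cutoff (V)
V_RESET = E_L    # membrane potential after a spike (V)

def simulate_eif(i_ext, duration=0.5, dt=1e-4):
    """Simulate one EIF neuron with constant input current i_ext (A).

    Integrates C dV/dt = -g_L (V - E_L)
                         + g_L * D_T * exp((V - V_T) / D_T) + i_ext
    with forward Euler and returns the number of spikes emitted.
    """
    v = E_L
    spikes = 0
    for _ in range(int(duration / dt)):
        dv = (-g_L * (v - E_L)
              + g_L * D_T * math.exp((v - V_T) / D_T)
              + i_ext) / C
        v += dt * dv
        if v >= V_CUT:        # exponential blow-up = spike: reset
            spikes += 1
            v = V_RESET
    return spikes
```

With these parameters the rheobase is roughly g_L·(V_T − E_L − D_T) ≈ 0.55 nA, so a constant 0.8 nA input produces repetitive firing while zero input leaves the neuron silent; in the full model such units would additionally receive the field's lateral-interaction and inter-field coupling currents.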
CONCLUSION: The proposed model reproduces, with a single parameter set, a variety of quite different experiments that address the interactions between action vision and action execution. Since the model uses physiologically plausible circuits, it makes a variety of predictions at the single-cell level.
Meeting abstract presented at VSS 2017