December 2022, Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract
Representation and integration of allocentric and egocentric visual information for goal-directed movements: A convolutional / multilayer perceptron network approach
Author Affiliations & Notes
  • Parisa Khoozani
    York University
  • Vishal Bharmauria
  • Adrian Schütz
  • Richard P. Wildes
  • J. Douglas Crawford
  • Footnotes
    Acknowledgements  Supported by a VISTA Program fellowship.
Journal of Vision December 2022, Vol. 22, 3549. https://doi.org/10.1167/jov.22.14.3549
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Allocentric (landmark-centered) and egocentric (eye-centered) visual information are optimally integrated for goal-directed movements. This integration has been observed within the supplementary and frontal eye fields, but the underlying mechanisms remain a puzzle, mainly because current theoretical models cannot explain data at different levels (i.e., behavior, single neuron, and distributed network). The purpose of this study was to create and validate a theoretical framework for this process using physiologically constrained inputs and outputs. To implement a general framework, we modelled the visual system as a Convolutional Neural Network (CNN) and the sensorimotor transformation as a Multilayer Perceptron (MLP). The network was trained on a task in which a landmark shifted relative to the saccade target. The visual stimuli (target and shifted landmark) were input to the CNN, the CNN output and initial gaze position were input to the MLP, and a decoder transformed the MLP output into saccade vectors. The network was trained on both idealized and actual monkey gaze behavior. Decoded saccade output replicated idealized training sets with various allocentric weightings, as well as actual monkey data (Bharmauria et al., Cerebral Cortex 2020) in which the landmark shift had a partial influence (R² = 0.80). Furthermore, MLP output units accurately simulated motor response field shifts recorded from monkeys (including open-ended response fields that shifted partially with the landmark) during the same paradigm. These results validate our framework and provide a suitable tool for studying the underlying mechanisms of allocentric-egocentric integration and other complex visuomotor behaviors.
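For concreteness, the sketch below illustrates the kind of CNN-to-MLP pipeline the abstract describes, written in PyTorch: a CNN encodes the retinal image (target plus landmark), an MLP combines the visual features with initial gaze position, and a linear decoder reads out a saccade vector. All class names, layer sizes, and dimensions here are illustrative assumptions, not the authors' actual implementation.

    import torch
    import torch.nn as nn

    class VisualCNN(nn.Module):
        """Stand-in for the visual system: encodes the retinal image
        (saccade target + landmark) into a feature vector."""
        def __init__(self, feat_dim=128):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),
            )
            self.fc = nn.Linear(32 * 4 * 4, feat_dim)

        def forward(self, img):
            x = self.conv(img)
            return self.fc(x.flatten(1))

    class SensorimotorMLP(nn.Module):
        """Stand-in for the sensorimotor transformation: combines CNN
        visual features with initial gaze position."""
        def __init__(self, feat_dim=128, gaze_dim=2, hidden=64, out_dim=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(feat_dim + gaze_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, out_dim), nn.ReLU(),
            )

        def forward(self, feats, gaze):
            return self.net(torch.cat([feats, gaze], dim=-1))

    cnn, mlp = VisualCNN(), SensorimotorMLP()
    decoder = nn.Linear(32, 2)  # maps MLP "motor" units to a 2-D saccade vector

    img = torch.randn(8, 1, 64, 64)         # batch of retinal images (hypothetical size)
    gaze = torch.randn(8, 2)                # initial gaze positions (x, y)
    saccade = decoder(mlp(cnn(img), gaze))  # predicted saccade vectors

Under this reading, training such a network on targets placed at varying positions relative to a shifted landmark (against idealized or recorded monkey saccade endpoints) would let one probe how allocentric and egocentric cues are weighted in the hidden and output units, analogous to the response field analyses reported above.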
