August 2023, Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Visual landmark information is multiplexed with target information in the visual responses of prefrontal gaze centres.
Author Affiliations & Notes
  • Vishal Bharmauria
    Centre for Vision Research (CVR) and Vision: Science to Applications (VISTA), York University
  • Adrian Schütz
    Department of Neurophysics, Philipps-Universität Marburg and Center for Mind, Brain and Behavior – CMBB, Philipps-Universität Marburg and Justus-Liebig-Universität Giessen
  • Xiaogang Yan
    Centre for Vision Research (CVR) and Vision: Science to Applications (VISTA), York University
  • Hongying Wang
    Centre for Vision Research (CVR) and Vision: Science to Applications (VISTA), York University
  • Frank Bremmer
    Department of Neurophysics, Philipps-Universität Marburg and Center for Mind, Brain and Behavior – CMBB, Philipps-Universität Marburg and Justus-Liebig-Universität Giessen
  • John Douglas Crawford
    Centre for Vision Research (CVR) and Vision: Science to Applications (VISTA), York University
  • Footnotes
    Acknowledgements  Canadian Institutes of Health Research (CIHR); Vision: Science to Applications (VISTA) Program; Deutsche Forschungsgemeinschaft (DFG)
Journal of Vision August 2023, Vol.23, 5546. doi:https://doi.org/10.1167/jov.23.9.5546
Citation: Vishal Bharmauria, Adrian Schütz, Xiaogang Yan, Hongying Wang, Frank Bremmer, John Douglas Crawford; Visual landmark information is multiplexed with target information in the visual responses of prefrontal gaze centres. Journal of Vision 2023;23(9):5546. https://doi.org/10.1167/jov.23.9.5546.
      © ARVO (1962-2015); The Authors (2016-present)
Abstract

How does the visual system extract useful information from a rich environment for action? For example, while reaching for a coffee cup, the brain may use several allocentric visual cues (a nearby book or computer) to grasp it effectively. Such landmarks influence spatial cognition and goal-directed behavior, but how landmark-related visual coding subserves action is poorly understood. To this end, using the gaze system as a model, we recorded 101/312 frontal eye field (FEF) and 43/256 supplementary eye field (SEF) visual responses (in two head-unrestrained monkeys) to the presentation of a target (T, 100 ms) in the presence of a visual landmark (L, intersecting lines presented in one of four diagonal directions/configurations relative to T). First, using a response-field model-fitting approach, we confirmed our previous findings (Bharmauria et al., 2020, 2021) that Te (target relative to initial 3D eye orientation) was the best model for FEF and SEF visual responses at the population level, although some neurons (30% in FEF and 20% in SEF) preferentially coded for the landmark. We then tested two mathematical continua (of ten equal steps) to quantify the influence of L on the visual response: 1) target-in-eye to landmark-in-eye (Te-Le), which directly tested the influence of L, and 2) target-in-eye to target-relative-to-landmark (Te-TL), which tested the multiplexed influence of T and L. Along both continua, most neurons showed a significant influence of the landmark relative to shuffled control data (suggesting both landmark coding and an influence of the landmark on target coding). Further, the same analysis on separate T-L configurations revealed a significant shift toward landmark-centered target coding at the population level in FEF. These results show that visual landmarks influence visual responses in the gaze system, potentially stabilizing future gaze in the presence of noisy 3D eye position signals.
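The continuum analysis described above (fitting response fields at ten equal steps between two candidate coordinate frames and taking the best-fitting step) can be sketched as follows. This is a minimal illustrative simulation, not the study's actual analysis: the trial geometry, the Gaussian response field, the grid-search fit, and all variable names are assumptions standing in for the nonparametric response-field fits used in the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trial geometry (degrees, eye coordinates). The landmark
# offset is drawn in one of four diagonal directions from the target,
# loosely mirroring the four T-L configurations in the abstract.
n_trials = 200
Te = rng.uniform(-20, 20, size=(n_trials, 2))                    # target-in-eye
diag = rng.choice([-1.0, 1.0], size=(n_trials, 2))               # diagonal direction
Le = Te + diag * rng.uniform(5, 20, size=(n_trials, 2))          # landmark-in-eye

def intermediate_frames(A, B, n_steps=10):
    """Trial coordinates at n_steps equal steps along the A-to-B continuum."""
    alphas = np.linspace(0.0, 1.0, n_steps)
    return [(1 - a) * A + a * B for a in alphas], alphas

def fit_residual(coords, rates, sigma=8.0):
    """Residual of a fixed-width Gaussian response-field fit, grid-searching
    the field centre -- a crude stand-in for nonparametric field fits."""
    best = np.inf
    for cx in np.linspace(-25, 25, 11):
        for cy in np.linspace(-25, 25, 11):
            pred = np.exp(-np.sum((coords - [cx, cy]) ** 2, axis=1)
                          / (2 * sigma ** 2))
            g = pred @ rates / max(pred @ pred, 1e-9)   # least-squares gain
            best = min(best, np.sum((rates - g * pred) ** 2))
    return best

# Simulate a neuron whose response field is anchored part-way along the
# Te-to-Le continuum (step 3 of 10, i.e. shifted toward the landmark).
frames, alphas = intermediate_frames(Te, Le)
true_coords = frames[3]
rates = np.exp(-np.sum((true_coords - [5.0, -5.0]) ** 2, axis=1) / (2 * 8.0 ** 2))
rates += rng.normal(0, 0.02, n_trials)

# Fit at every step; the best-fitting step estimates where along the
# continuum the neuron's coding lies.
residuals = [fit_residual(c, rates) for c in frames]
best_step = int(np.argmin(residuals))
print(f"best-fit step: {best_step} of {len(frames) - 1}")
```

In the study, the recovered best-fit step for each neuron was compared against shuffled control data to decide whether the landmark's influence was significant; the sketch above only illustrates the step-recovery idea.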
