December 2009
Volume 9, Issue 14
OSA Fall Vision Meeting Abstract
Decoding eye position from responses in human visual cortex
Author Affiliations
  • Elisha Merriam
    New York University
Journal of Vision December 2009, Vol.9, 18. doi:10.1167/9.14.18
Abstract

To estimate the positions of objects in the visual world, the brain must combine information about where on the retina a response was evoked with information about the direction of gaze. All early human visual areas contain information about the retinotopic location of stimuli, but little is known about their representation of eye position. Single unit studies in monkeys have demonstrated that static eye position modulates the gain of visual activity (gain fields), yielding responses that simultaneously reflect both retinotopic stimulus location and eye position. We tested the hypothesis that both eye position and retinotopic stimulus location are represented in human visual cortex.

We measured cortical responses using fMRI to the same retinal stimulation while systematically varying eye position (3T Siemens Allegra, 4-channel phased-array surface coil, 2×2×2mm, 25 slices perpendicular to calcarine sulcus). Subjects performed a demanding psychophysical task at fixation, with the eyes held at one of nine orbital positions (−10°, 0°, +10° horizontal and −6°, 0°, +6° vertical relative to screen center). Eye position was maintained during each 4 minute run and pseudo-randomized across runs (4 subjects, 16–20 runs each). The stimulus was a circular patch (3° radius) of random dots with a motion-defined wedge (90°) that rotated slowly (1 cycle/24 s) through the visual field about the current fixation point. At each cortical location, we computed the response amplitude and phase at the stimulus frequency to estimate the magnitude of the response and the preferred stimulus location, respectively. Visual areas (including V1, V2, V3, MT, hV4, V3A) were identified using similar methods but with larger (10°) stimuli.
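The per-voxel amplitude and phase estimates described above amount to reading out the Fourier component at the stimulus frequency. A minimal numpy sketch follows; the 10 cycles per run are implied by the 4-minute run and the 24-s stimulus period, and the function name and normalization details are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

def amplitude_phase_at_stim_freq(timeseries, n_cycles=10):
    """Estimate response amplitude and phase at the stimulus frequency.

    timeseries : 1-D fMRI signal for one voxel over one run.
    n_cycles   : stimulus cycles per run (240 s run / 24 s per cycle = 10).
    """
    ts = timeseries - timeseries.mean()        # remove DC offset
    spectrum = np.fft.rfft(ts)
    c = spectrum[n_cycles]                     # component at stimulus frequency
    amplitude = 2 * np.abs(c) / len(ts)        # peak amplitude of the sinusoid
    phase = np.angle(c)                        # maps to preferred polar angle
    return amplitude, phase
```

For a pure sinusoid at the stimulus frequency, this recovers the generating amplitude and phase exactly; in practice the phase estimate would also be corrected for hemodynamic delay.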

Response amplitudes varied with eye position. We used a multi-voxel pattern analysis to read out eye position from the spatial pattern of response amplitudes. The classifier reliably discriminated eye position in all visual areas, taking advantage of a heterogeneous encoding of eye position across the cortical surface, with a slight ipsilateral bias (e.g., higher gain in left hemisphere for left eye positions). On the other hand, response phase did not vary with eye position in any visual area, and a classifier based on the spatial pattern of response phases failed to discriminate eye position.
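The multi-voxel readout can be illustrated with a toy decoder. The abstract does not name the classifier, so this sketch substitutes a simple leave-one-run-out nearest-centroid decoder applied to simulated gain-field patterns; all data, sizes, and noise levels here are synthetic assumptions, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
n_positions, n_runs_per_pos, n_voxels = 9, 4, 100

# Each voxel gets an eye-position-dependent gain (a "gain field");
# each run yields one spatial pattern of response amplitudes.
gain = rng.normal(size=(n_positions, n_voxels))
labels = np.repeat(np.arange(n_positions), n_runs_per_pos)
X = gain[labels] + 0.5 * rng.normal(size=(labels.size, n_voxels))

def nearest_centroid_decode(train_X, train_y, test_X):
    """Assign each test pattern the label of the closest class-mean pattern."""
    classes = np.unique(train_y)
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(test_X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]

# Leave-one-run-out cross-validation
correct = 0
for i in range(labels.size):
    train = np.arange(labels.size) != i
    pred = nearest_centroid_decode(X[train], labels[train], X[i:i + 1])
    correct += int(pred[0] == labels[i])
accuracy = correct / labels.size
```

With eye-position-dependent gains well above the noise, decoding accuracy is far above the 1-in-9 chance level, mirroring the reliable discrimination reported for the amplitude patterns (and the failure expected if the patterns carried no eye-position signal, as with the phase patterns).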

Because response phase was invariant to eye position, visual areas represent stimuli in a fixed retinotopic reference frame. These retinotopic representations are modulated by gain fields: changes in response amplitude that depend on the direction of gaze. We conclude that population responses in human visual cortex encode both eye position and retinotopic stimulus location.

Merriam, E. (2009). Decoding eye position from responses in human visual cortex [Abstract]. Journal of Vision, 9(14):18, 18a, http://journalofvision.org/9/14/18/, doi:10.1167/9.14.18.