Vision Sciences Society Annual Meeting Abstract | September 2011
Visual integration in the human brain
Author Affiliations
  • Jedediah Singer
    Children's Hospital, Boston, USA
    Harvard Medical School, USA
  • Joseph Madsen
    Children's Hospital, Boston, USA
    Harvard Medical School, USA
  • Gabriel Kreiman
    Children's Hospital, Boston, USA
    Harvard Medical School, USA
    Swartz Center for Theoretical Neuroscience, Harvard University, USA
    Center for Brain Science, Harvard University, USA
Journal of Vision September 2011, Vol. 11, 887. https://doi.org/10.1167/11.11.887
Abstract

We live in a dynamic world, and seeing is a continuous process. New information about objects becomes available through saccades, with motion relative to intervening obstructions, as our eyes focus and adapt, and as environmental conditions change. Yet each instant does not bring us a fresh vista that must be parsed anew: we have “buffering” mechanisms for storing visual information about what was just seen, so that new information can be added to it and can modulate it. We investigated the time window spanned by this putative buffer, as well as where in the brain the cells instantiating it might be located. We recorded intracranial field potentials from tens of subdural grid electrodes implanted in each of fifteen epilepsy patients while they performed a recognition task. A stream of rapidly changing visual noise (10–30 Hz) had embedded within it an image, an image fragment, or two complementary asynchronous image fragments. Images were grayscale renderings of natural objects chosen to elicit varied electrophysiological responses. In each trial, the subject's task was to indicate whether the embedded image(s) matched a test image presented at the end. Behaviorally, performance was maximal when whole images or complementary pairs with short asynchronies were shown. As asynchrony increased, performance dropped; isolated image halves were the most difficult. We were able to discriminate between whole-image presentations and isolated image fragments using the field potential signals recorded at many electrodes in visual cortical areas. Responses to fragment pairs at short (within 100 ms) asynchronies resembled responses to whole images; at longer asynchronies, fragments elicited independent responses. These results show that occipito-temporal visual areas of the human brain can combine information presented across a span of at least 100 ms when representing the visual world. These observations impose quantitative constraints on computational models of continuous visual perception.
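To make the timing manipulation concrete, here is a minimal Python sketch of how one trial's frame sequence could be laid out, assuming a fixed 20 Hz stream (within the 10–30 Hz range above). The function name, frame labels, and parameters are illustrative assumptions, not the authors' actual stimulus code.

```python
# Hypothetical sketch of the trial structure described in the abstract:
# a rapidly changing noise stream with two complementary image fragments
# embedded at a controlled asynchrony. All names here are illustrative.
import numpy as np

def build_trial(frame_rate_hz=20.0, stream_dur_s=2.0,
                asynchrony_ms=100.0, rng=None):
    """Return a list of frame labels for one fragment-pair trial.

    Frames are 'noise' except where image content appears: 'fragA'
    marks the first image half and 'fragB' the complementary half,
    onset-shifted by `asynchrony_ms`. An asynchrony of 0 reduces to
    showing the whole image at once.
    """
    rng = rng or np.random.default_rng()
    n_frames = int(stream_dur_s * frame_rate_hz)
    frame_ms = 1000.0 / frame_rate_hz
    frames = ['noise'] * n_frames

    # Place the first fragment at a random point in the stream,
    # leaving room for the delayed second fragment.
    shift = int(round(asynchrony_ms / frame_ms))
    onset_a = int(rng.integers(1, n_frames - shift - 1))
    onset_b = onset_a + shift

    frames[onset_a] = 'fragA'
    frames[onset_b] = 'fragB' if shift > 0 else 'whole'
    return frames

# Example: at 20 Hz, a 100 ms asynchrony is a two-frame offset.
print(build_trial(frame_rate_hz=20.0, asynchrony_ms=100.0))
```

At 20 Hz each frame lasts 50 ms, so the critical 100 ms asynchrony in the results corresponds to a two-frame offset between the complementary halves; longer asynchronies push the second fragment further down the stream.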

Supported by NIH, NSF, the Klingenstein Fund, the Whitehall Foundation, and the Lions Foundation.