Jedediah Singer, Joseph Madsen, Gabriel Kreiman; Visual integration in the human brain. Journal of Vision 2011;11(11):887. doi: https://doi.org/10.1167/11.11.887.
We live in a dynamic world, and seeing is a continuous process. New information about objects becomes available through saccades, through motion relative to intervening obstructions, as our eyes focus and adapt, and as environmental conditions change. Yet each instant does not bring a fresh vista that must be parsed anew: we have “buffering” mechanisms that store visual information about what was just seen, so that new information can be added to it and modulate it. We investigated the time window spanned by this putative buffer, as well as where in the brain the cells instantiating it might be located. We recorded intracranial field potentials from tens of subdural grid electrodes implanted in each of fifteen epilepsy patients while they performed a recognition task. A stream of rapidly changing visual noise (10–30 Hz) contained an embedded image, an image fragment, or two complementary image fragments presented asynchronously. Images were grayscale renderings of natural objects chosen to elicit varied electrophysiological responses. In each trial, the subject's task was to indicate whether the embedded image(s) matched a test image presented at the end. Behaviorally, performance was maximal when whole images or complementary pairs with short asynchronies were shown. As asynchrony increased, performance dropped; isolated image halves were the most difficult. We were able to discriminate between whole-image presentations and isolated image fragments using the field potential signals recorded at many electrodes in visual cortical areas. Responses to fragment pairs at short asynchronies (within 100 ms) resembled responses to whole images; at longer asynchronies, fragments elicited independent responses. These results show that occipito-temporal visual areas of the human brain can combine information presented across a span of at least 100 ms when representing the visual world.
These observations impose quantitative constraints on computational models of continuous visual perception.
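The stimulus timing described above can be made concrete with a minimal sketch. Given the noise-stream frame rate and the asynchrony between the two complementary fragments, one can compute which frames of the stream would carry each fragment. The function name and parameters below are illustrative assumptions, not the authors' actual stimulus code.

```python
# Hypothetical sketch of the stimulus schedule: a rapid noise stream
# (10-30 Hz frame rate) with two complementary image fragments embedded
# at a chosen asynchrony. All names and values are illustrative.

def fragment_onset_frames(frame_rate_hz, first_onset_ms, asynchrony_ms):
    """Return the noise-stream frame indices on which the first and
    second image fragments would appear."""
    frame_ms = 1000.0 / frame_rate_hz            # duration of one noise frame
    first = int(first_onset_ms // frame_ms)      # frame holding fragment 1
    second = int((first_onset_ms + asynchrony_ms) // frame_ms)  # fragment 2
    return first, second

# At 20 Hz (50 ms frames), a 100 ms asynchrony places the two fragments
# two frames apart; a 0 ms asynchrony puts them on the same frame,
# which is the condition behaving like a whole-image presentation.
print(fragment_onset_frames(20, 200, 100))  # → (4, 6)
print(fragment_onset_frames(20, 200, 0))    # → (4, 4)
```

Under this toy parameterization, the behaviorally critical ~100 ms integration window corresponds to only a handful of noise frames at the presentation rates used.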