Abstract
Although objects are usually defined by many different visual attributes (color, brightness, motion, or texture), these attributes are bound together to yield a unified and coherent perception. Two schemes are commonly contrasted to account for this phenomenon: One postulates that visual features are first analyzed by separate, independent populations of neurons in the early cortical visual areas, and that the different representations are recombined at a later stage to generate a coherent percept. The other holds that individual cells in the visual pathways can code for several stimulus dimensions simultaneously.
To test these hypotheses, we recorded the EEG of human subjects during an object detection task: The subjects had to detect a square embedded in an array of visual elements (the background). The square differed from the background either in color or in texture, and we compared the signals recorded in these two conditions. We recorded 32-channel scalp EEGs from 9 subjects to study whether changes in an object's visual attributes would be reflected in the spatial distribution of scalp potentials, in the temporal structure of the signals, or in both. We analyzed the data with source localization algorithms and coherence analysis, and compared the outcome to the subjects' behavioral results.
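As a rough illustration of the coherence part of such an analysis (not the authors' actual pipeline), the sketch below computes magnitude-squared coherence between one pair of electrodes and averages it across trials; the sampling rate, channel indices, and frequency band are assumptions chosen for the example.

```python
# Minimal sketch of a pairwise coherence analysis, assuming EEG epochs stored
# as a NumPy array of shape (n_trials, n_channels, n_samples). The sampling
# rate and electrode indices below are hypothetical.
import numpy as np
from scipy.signal import coherence

FS = 512            # assumed sampling rate (Hz)
CH_A, CH_B = 14, 27  # hypothetical electrode indices (e.g. an occipital pair)

def mean_pairwise_coherence(epochs: np.ndarray, ch_a: int, ch_b: int):
    """Average magnitude-squared coherence between two channels across trials."""
    spectra = []
    for trial in epochs:
        f, cxy = coherence(trial[ch_a], trial[ch_b], fs=FS, nperseg=256)
        spectra.append(cxy)
    return f, np.mean(spectra, axis=0)

# Compare the two stimulus conditions, e.g. in an assumed 30-60 Hz band:
# f, coh_color = mean_pairwise_coherence(color_epochs, CH_A, CH_B)
# f, coh_texture = mean_pairwise_coherence(texture_epochs, CH_A, CH_B)
# band = (f >= 30) & (f <= 60)
# print(coh_color[band].mean(), coh_texture[band].mean())
```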
Our analysis of the spatial properties of the EEG responses did not reveal any significant difference between the two conditions. However, there does appear to be a difference in the latency of the evoked potential components: when stimulus strength is equated as a multiple of detection threshold, the potentials evoked by color-defined objects have a shorter latency than those evoked by texture-defined objects. Our results suggest that objects defined by either texture or color are processed within the same cortical areas, but that their perception is mediated by different neural circuits within these areas.
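The latency comparison could, for instance, be read off the trial-averaged waveforms along the following lines; the search window, channel choice, and sampling rate are illustrative assumptions, not the parameters used in the study.

```python
# Illustrative sketch: peak latency of an evoked-potential component from a
# single-channel, trial-averaged ERP. Window and sampling rate are assumed.
import numpy as np

FS = 512  # assumed sampling rate (Hz)

def peak_latency_ms(erp: np.ndarray, t0_ms: float = -100.0,
                    win_ms=(100.0, 300.0)) -> float:
    """Latency (ms) of the largest absolute deflection within a search window;
    t0_ms is the time of the first sample relative to stimulus onset."""
    times = t0_ms + np.arange(erp.size) * 1000.0 / FS
    mask = (times >= win_ms[0]) & (times <= win_ms[1])
    idx = np.argmax(np.abs(erp[mask]))
    return float(times[mask][idx])

# erp_color and erp_texture would be averaged waveforms from one occipital
# channel, recorded at matched multiples of detection threshold:
# print(peak_latency_ms(erp_color), peak_latency_ms(erp_texture))
```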
Supported by Swiss National Science Foundation grant 3100-056782