Jonathan W. Peirce; Understanding mid-level representations in visual processing. Journal of Vision 2015;15(7):5. doi: https://doi.org/10.1167/15.7.5.
It is clear that early visual processing provides an image-based representation of the visual scene: Neurons in striate cortex (V1) encode nothing about the meaning of a scene, but they do provide a great deal of information about the image features within it. The mechanisms of these “low-level” visual processes are relatively well understood. We can construct plausible models of how neurons, up to and including those in V1, build their representations from preceding inputs down to the level of the photoreceptors. It is also clear that at some point we form a semantic, “high-level” representation of the visual scene, because we can verbally describe the objects we are viewing and their meaning to us. Each year, a huge number of studies examine these “high-level” visual processes. Less well studied are the processes of “mid-level” vision, which presumably provide the bridge between the “low-level” representations of edges, colors, and lights and the “high-level” semantic representations of objects, faces, and scenes. This article, and the special issue of papers in which it is published, considers the nature of “mid-level” visual processing and some of the reasons why we might not have made as much progress in this domain as we would like.