Sarah Hancock, David P. McGovern, Jonathan W. Peirce; Ameliorating the combinatorial explosion with spatial frequency-matched combinations of V1 outputs. Journal of Vision 2010;10(8):7. doi: https://doi.org/10.1167/10.8.7.
Little is known about the way in which the outputs of early orientation-selective neurons are combined. One particular problem is that the number of possible combinations of these outputs greatly exceeds the number of processing units available to represent them. Here we consider two of the possible ways in which the visual system might reduce the impact of this problem. First, the visual system might ameliorate the problem by collapsing across some low-level feature coded by previous processing stages, such as spatial frequency. Second, the visual system may combine only a subset of available outputs, such as those with similar receptive field characteristics. Using plaid-selective contrast adaptation and the curvature aftereffect, we found no evidence for the former solution; both aftereffects were clearly tuned to the spatial frequency of the adaptor relative to the test probe. We did, however, find evidence for the latter with both aftereffects; when the components forming our compound stimuli were dissimilar in spatial frequency, the effects of adapting to them were substantially reduced. This has important implications for mid-level visual processing, both for the combinatorial explosion and for the selective “binding” of common features that are perceived as coming from a single visual object.