Suniyya A. Waraich, Jonathan D. Victor; Mapping perceptual spaces of objects and low-level features. Journal of Vision 2021;21(9):1941. doi: https://doi.org/10.1167/jov.21.9.1941.
The transformation of visual signals from elementary features into semantic content entails a qualitative change in the nature of the information that is represented. Elementary features (e.g., color or texture) form continua, while semantic representations (e.g., objects) are often categorical. To probe this process, we developed and implemented a psychophysical paradigm to characterize the geometry of visual representations at several stages of the transformation. We hypothesize that representing semantic information requires a different geometry from representing low-level features. In parallel experiments, stimuli were drawn from three domains: (1) the names of 37 familiar animals (from WordNet), (2) texturized but recognizable images of these animals, and (3) fully scrambled textures. In each experiment, subjects viewed displays consisting of one central reference stimulus with 8 surrounding stimuli and were asked to rank the surrounding stimuli in order of similarity to the central reference. This rank-order design yielded 5994 unique choice probabilities and included trials to check for context-dependence, i.e., judgments of "Is A (the reference stimulus) more similar to B or to C?" in the presence of different surrounding stimuli. We found that choice probabilities were consistent across subjects and contexts (n=3 for (1), n=2 for (2), n=2 for (3)). We used multidimensional scaling to test Euclidean geometric models that could account for the similarity judgments. For all domains, the minimum number of dimensions needed to describe the perceptual space was greater than four. Although models of all the representations were similarly high-dimensional, preliminary results suggested that their geometric characteristics differed. Specifically, points in the semantic space (animal names) formed tight clusters and were mostly near the periphery, whereas in the two texture domains, the points were more evenly distributed throughout the space.
These results suggest a qualitative difference between the geometry of representations of semantic information versus low-level features.
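The abstract does not specify which variant of multidimensional scaling was used, but the general idea of recovering Euclidean coordinates from pairwise (dis)similarities can be sketched with classical (Torgerson) MDS. The following is a minimal, hypothetical illustration in numpy, not the authors' actual analysis pipeline; in the study, dissimilarities would be derived from the measured choice probabilities rather than constructed synthetically as here.

```python
import numpy as np

def classical_mds(D, n_components=2):
    """Classical (Torgerson) MDS: embed a symmetric distance matrix D
    into n_components Euclidean dimensions via double centering and
    eigendecomposition of the resulting Gram matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)      # eigenvalues in ascending order
    idx = np.argsort(eigvals)[::-1][:n_components]
    scale = np.sqrt(np.clip(eigvals[idx], 0, None))
    return eigvecs[:, idx] * scale            # n x n_components coordinates

# Hypothetical check: points with a known 2-D layout should be
# recoverable (up to rotation/reflection) from their distance matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Y = classical_mds(D, n_components=2)
D_hat = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
print(np.allclose(D, D_hat))
```

The sizes of the retained eigenvalues indicate how many dimensions are needed to account for the data, which is one way a minimum dimensionality (here, greater than four) could be assessed.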