Imri Sofer, Kwang Ryeol Lee, Pachaya Sailamul, Sébastien Crouzet, Thomas Serre; Understanding the nature of the visual representations underlying rapid categorization tasks. Journal of Vision 2013;13(9):658. doi: https://doi.org/10.1167/13.9.658.
Unlike simple artificial visual stimuli, which can be readily parameterized, natural image datasets are difficult to control in psychophysics experiments. For instance, we do not yet have metrics for natural images that capture the intrinsic visual similarity between real-world object categories. It is thus hard to control for task difficulty when dealing with natural object categories, which may introduce confounds in existing experiments. Here we introduce a measure of the discriminability between natural scene categories based on visual features. The measure is obtained by first extracting image features from existing computational models of visual processing, spanning low-, mid-, and high-level features. A classifier is then trained to discriminate between pairs of categories, and an error rate is computed on an independent dataset. This procedure was performed on 10 categories of four-legged mammals (dog, bear, gorilla, zebra, etc.) and tested on participants performing 2AFC rapid categorization tasks using all possible category pairs (average performance=83%, SD=7.9, n=4). Surprisingly, we found that low-level visual features are already highly correlated with the pattern of performance (both accuracy and reaction time) from individual participants (r>0.80, p<0.001). Our results suggest that visual similarities based on low-level visual features may account for various effects observed in visual categorization experiments that are typically attributed to semantic aspects of visual processing. Overall, such visual similarity measurements might offer a unique tool for controlling image sets in visual recognition experiments and further improve our understanding of the underlying visual processes.
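The pairwise-discriminability procedure described above — extract features per category, train a classifier for each category pair, and score an error rate on held-out data — can be sketched as follows. This is a minimal illustration, not the authors' code: the feature extractor is stubbed with synthetic vectors (the abstract's actual features came from computational models of visual processing), and the category list, dimensionality, and classifier choice are assumptions.

```python
# Hypothetical sketch of the pairwise category-discriminability measure.
# extract_features is a stand-in for model-derived low/mid/high-level
# features; here it just samples a noisy cluster per category.
import itertools
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def extract_features(category, n_images=40, dim=32):
    """Stub: one feature cluster per category (replace with real model features)."""
    center = rng.normal(size=dim)
    return center + rng.normal(scale=2.0, size=(n_images, dim))

categories = ["dog", "bear", "gorilla", "zebra"]  # illustrative subset of the 10
features = {c: extract_features(c) for c in categories}

def pair_error_rate(a, b):
    """Train a classifier on one category pair; return error on held-out images."""
    X = np.vstack([features[a], features[b]])
    y = np.array([0] * len(features[a]) + [1] * len(features[b]))
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.5, random_state=0, stratify=y)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return 1.0 - clf.score(X_te, y_te)  # higher error = less discriminable pair

# One error rate per unordered category pair, as in the abstract's 2AFC design.
errors = {pair: pair_error_rate(*pair)
          for pair in itertools.combinations(categories, 2)}
for pair, err in sorted(errors.items(), key=lambda kv: kv[1]):
    print(pair, round(err, 2))
```

In the study, these model-derived error rates were then correlated with human accuracy and reaction time on the same category pairs; with real features, pairs with higher classifier error would be predicted to be harder for participants.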
Meeting abstract presented at VSS 2013