Abstract
Many experiments in social and visual research have demonstrated behavioral effects of semantic category differences, even when stimuli are presented subliminally. For example, threatening faces reach awareness faster than neutral faces, and images of naked human bodies attract attention when they match the observer's sexual preference. Overall, images from categories that are relevant to an observer reach awareness faster than irrelevant images. However, a direct comparison of the processing of visual images from different semantic categories is complicated by inherent differences in low-level image properties. Thus the question remains: is the time an image requires to reach awareness determined by its semantic category and relevance, or by low-level image properties? Here, we used a set of 400 pseudo-randomly selected images (from Google Images), divided into four semantic categories (food, animals, art, and naked human bodies), to test whether access to awareness differs between categories once low-level image properties are taken into account. We used a breaking continuous flash suppression (b-CFS) paradigm to measure the time an image takes to reach the observer's awareness. Next, we extracted multiple indices of color and spatial frequency information from each image. Using a mixed-effects analysis, we show that after taking image statistics into account, naked human bodies show no categorical effect on access to awareness. Images of animals, however, still show deviating access-to-awareness times compared to all other categories. Taken together, we show that most of the variance in access to awareness is in fact due to differences in low-level image properties. In particular, we find that differences between the spatial frequency content of the target image and that of the interocular mask strongly predict variance in access to awareness. Our results demonstrate the importance of taking image statistics into account before comparing semantic categories.
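The abstract does not specify which low-level indices were extracted; as a minimal sketch, common choices include mean luminance, RMS contrast, and the slope of the radially averaged log-log power spectrum (a standard spatial frequency summary for natural images). The function below is an illustrative stand-in, not the study's actual pipeline, and assumes a grayscale image supplied as a NumPy array:

```python
import numpy as np

def image_statistics(img):
    """Illustrative low-level image statistics for a 2D grayscale array.

    Returns (mean luminance, RMS contrast, spectral slope), where the
    spectral slope is fit to log power vs. log spatial frequency.
    Natural images typically show slopes near -2; white noise near 0.
    """
    img = np.asarray(img, dtype=float)
    mean_lum = img.mean()
    rms_contrast = img.std()

    # 2D power spectrum with the DC component shifted to the center.
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = power.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.indices(power.shape)
    radius = np.hypot(yy - cy, xx - cx).astype(int)

    # Radially average the power spectrum, skipping DC (radius 0).
    max_r = min(cy, cx)
    totals = np.bincount(radius.ravel(), weights=power.ravel())
    counts = np.bincount(radius.ravel())
    radial = totals[1:max_r] / counts[1:max_r]
    freqs = np.arange(1, max_r)

    # Slope of the log-log spectrum via a first-order polynomial fit.
    slope = np.polyfit(np.log(freqs), np.log(radial), 1)[0]
    return mean_lum, rms_contrast, slope
```

Such per-image indices can then enter a mixed-effects model as fixed-effect covariates alongside semantic category, with observers as a random effect.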
Meeting abstract presented at VSS 2017