Abstract
Existing research demonstrates the important role of global scene properties in early visual processing. However, it is unknown how these global properties interact during processing. In the current study, we examine categorization performance on two global property scales, "natural – manmade" and "open – closed." To balance the extent to which scenes exemplified characteristics across the scales, we first ran an experiment on Amazon Mechanical Turk (AMT) to estimate latent rankings for each scale. On each trial, a participant was shown two photographs drawn at random from a subset of the Scene Understanding Database and asked to report which better exemplified the characteristic, e.g., "Which scene is more natural?" These judgments were then used to estimate the rankings with the Bradley-Terry model. In our main experiment, participants categorized scene images in three blocks: natural/manmade; open/closed; and "natural and open"/"manmade or closed." Participants' choice probabilities were largely consistent with the AMT rankings when they discriminated natural from manmade scenes. However, they tended to categorize natural-closed images as open in the open/closed block and, consistent with this judgment, frequently categorized natural-closed images as "natural and open" in the conjunction block. To examine whether people determine "natural" and "open" more efficiently together than separately, we compared performance against a baseline independent, parallel, exhaustive processing model, estimated from response times in the natural/manmade and open/closed blocks. All participants were more efficient in the "natural and open" block than the baseline model predicted. Because "natural" and "open" are highly correlated across the ranking scales, participants may perform only the natural/manmade discrimination when asked to categorize "natural and open" images, which would account for their highly efficient performance in that block.
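To make the ranking step concrete, below is a minimal Python sketch of Bradley-Terry strength estimation from pairwise judgments. The abstract confirms the Bradley-Terry model but not the fitting procedure; the MM update here follows Hunter's (2004) algorithm, and the function name and win-count data are hypothetical, for illustration only.

```python
import numpy as np

def fit_bradley_terry(wins, n_iter=200, tol=1e-8):
    """Hypothetical sketch: wins[i, j] = times scene i was judged
    'more natural' (or 'more open') than scene j."""
    n = wins.shape[0]
    p = np.ones(n)                        # latent strengths
    n_ij = wins + wins.T                  # comparisons per pair
    w = wins.sum(axis=1)                  # total wins per scene
    for _ in range(n_iter):
        # Hunter's MM update: p_i = W_i / sum_j n_ij / (p_i + p_j)
        denom = n_ij / (p[:, None] + p[None, :])
        np.fill_diagonal(denom, 0.0)
        p_new = w / denom.sum(axis=1)
        p_new /= p_new.sum()              # fix the arbitrary scale
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p

# Hypothetical judgments among 4 scenes
wins = np.array([[0, 8, 9, 7],
                 [2, 0, 6, 5],
                 [1, 4, 0, 6],
                 [3, 5, 4, 0]])
strengths = fit_bradley_terry(wins)
ranking = np.argsort(-strengths)          # latent ranking, most natural first
```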
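The efficiency comparison can likewise be sketched. One standard formalization of the independent, parallel, exhaustive baseline is the AND capacity coefficient of Townsend and Wenger (2004): under the baseline, the conjunction response-time CDF is the product of the two single-task CDFs, so C_AND(t) = [log F_nat(t) + log F_open(t)] / log F_and(t), with values above 1 indicating greater-than-baseline efficiency. The abstract does not specify this exact statistic, and the response-time data below are simulated placeholders.

```python
import numpy as np

def ecdf(sample, t):
    """Empirical CDF of `sample` evaluated at each time in `t`."""
    return np.searchsorted(np.sort(sample), t, side="right") / len(sample)

def c_and(rt_nat, rt_open, rt_and, t):
    """AND capacity coefficient; C_AND(t) > 1 beats the baseline."""
    F_nat, F_open, F_and = (ecdf(x, t) for x in (rt_nat, rt_open, rt_and))
    ok = (F_nat > 0) & (F_open > 0) & (F_and > 0) & (F_and < 1)  # keep logs finite
    C = np.full(t.shape, np.nan)
    C[ok] = (np.log(F_nat[ok]) + np.log(F_open[ok])) / np.log(F_and[ok])
    return C

# Hypothetical correct-response times (ms) per block
rng = np.random.default_rng(0)
rt_nat = rng.gamma(shape=20, scale=30, size=200)    # natural/manmade block
rt_open = rng.gamma(shape=22, scale=30, size=200)   # open/closed block
rt_and = rng.gamma(shape=19, scale=30, size=200)    # conjunction block, faster
t = np.linspace(450, 850, 9)
print(c_and(rt_nat, rt_open, rt_and, t))            # values > 1: supercapacity
```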
Meeting abstract presented at VSS 2017