Abstract
During a brief glance, people grasp mainly the category, or the “gist,” of a scene. Is categorical scene information registered in the absence of attention? Previous studies investigating this question have typically involved explicit scene detection or an explicit report of a scene’s gist. A possible limitation of such explicit methods is that scenes are prioritized by task demands (e.g., in dual tasks), and/or the assessment of scene processing is prone to working-memory capacity limitations (e.g., in the inattentional blindness paradigm). To avoid these potential limitations, we examined scene categorization under conditions in which unattended processing was assessed implicitly (i.e., indirectly). Participants searched for a superordinate scene category (e.g., “nature”) among briefly presented pairs of colored scenes positioned below and above fixation. Within pairs containing scenes from non-searched categories (e.g., “urban”, “indoor”), the two items belonged either to the same or to different categories. When both scenes were attended, reaction time (RT) for same-category pairs was significantly shorter than for different-category pairs, indicating that scene category was registered. Critically, when participants were cued to respond to one of the two scenes in a pair, while its counterpart served as an irrelevant distractor positioned outside the main focus of attention, the categorical effect was eliminated. These findings suggest that the unattended scene was not automatically categorized. An irrelevant (unattended) scene affected behavior only if it served as a to-be-detected target, or if it appeared as a background for a central scene. Similar findings were obtained with achromatic scene images. Collectively, our findings suggest that when one focuses on a task-relevant scene flanked by an irrelevant (distractor) scene, the latter’s gist is not necessarily registered. However, if one focuses on a central stimulus embedded within a background scene, categorical information about the surrounding environment may be extracted rapidly and automatically, even with little attentional capacity.
Meeting abstract presented at VSS 2015