Abstract
Background: Reading is constrained by inherent limitations of the visual system; for instance, you cannot comprehend this whole paragraph at once. Beyond the limitations of peripheral vision that require saccades to scan the page, additional capacity limits could impair simultaneous processing of even just the two words visible during a single fixation. Such capacity limits would impair performance when observers attempt to recognize two words at once compared with just one, as previous research in our lab has found. Here, we applied dual- and single-task paradigms to investigate the capacity limits for semantic recognition of multiple words compared with judging their lower-level features.

Methods: Words appeared in rapid serial visual presentation simultaneously to the left and right of fixation. In dual-task conditions, observers made independent judgments about the presence of target words at both locations. In single-task conditions, observers focused attention to judge words on only one side. Simple feature tasks required detecting increments in the luminance or color saturation of the words. Semantic categorization required deeper processing of the same stimuli, for instance detecting an animal word among other nouns. For each type of task, we quantified the dual-task deficit and compared it with the predictions of three signal detection models: an unlimited-capacity parallel model, a fixed-capacity parallel model, and the standard serial model (one word at a time).

Results: Simple feature tasks showed no dual-task deficits (no divided-attention effects), consistent with previous results. Semantic categorization of the same stimuli suffered reliable deficits that were consistent with the fixed-capacity parallel model. We conclude that words need not be processed serially; indeed, simple features of two words can be processed in parallel at no cost. Higher-level linguistic processing, however, is capacity-limited, such that recognition is best when processing just one word at a time.
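The qualitative predictions of the three models can be sketched in signal detection terms. The following is a minimal illustration of our own (not taken from the abstract), assuming the standard sample-size form of the fixed-capacity model, sensitivity expressed as d', and an unbiased yes/no observer whose accuracy is Phi(d'/2):

```python
import math


def predicted_dual_task_performance(d_single: float) -> dict:
    """Illustrative dual-task predictions under three signal detection models.

    Assumptions (ours, for illustration only):
    - unlimited-capacity parallel: no cost, so dual-task d' equals
      single-task d'
    - fixed-capacity parallel (sample-size model): a fixed pool of
      samples is split across the two words, so d' falls by sqrt(2)
    - serial: only one word is processed per brief trial; accuracy on
      the other side is chance (0.5), giving a mixture of processed
      and guessed trials in proportion correct
    """
    # Unbiased yes/no accuracy: Phi(d'/2), with Phi the standard
    # normal CDF written via math.erf.
    p_single = 0.5 * (1 + math.erf(d_single / (2 * math.sqrt(2))))
    return {
        "unlimited_parallel_dprime": d_single,
        "fixed_capacity_dprime": d_single / math.sqrt(2),
        "serial_prop_correct": 0.5 * p_single + 0.5 * 0.5,
    }
```

Comparing an observed dual-task deficit against these three predictions is what distinguishes cost-free parallel processing (feature tasks) from capacity-limited processing (semantic categorization).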
Meeting abstract presented at VSS 2016