Abstract
Studies of object files using the object-reviewing paradigm have revealed that object features can be accessed by addressing their spatiotemporal locations. In the divided-attention literature, evidence for coactivation of different features has been reported using the redundant-signals paradigm. These two lines of research raise the question of whether object files play a significant role in integrating features, which remains unresolved because evaluating feature coactivation requires response-time (RT) distribution analysis, which has never been applied to the object-reviewing paradigm. The current study conducted RT distribution analyses with a new task combining the object-reviewing and redundant-signals paradigms. Observers saw a preview display composed of two colored boxes, each containing a letter, above and below fixation, followed by a linking display. They then saw a target display with a single object and judged as quickly as possible whether the target contained the color or shape of the preview objects. On match trials, the matching features belonged either to the same object (SO) or to a different object (DO), as in the reviewing paradigm. The type of match was color, shape, or color-and-shape (object), as in the redundant-signals paradigm. In the object condition, a mixed condition combining one SO feature and one DO feature was also included. Mean RTs revealed a significant redundancy gain in both the SO and DO conditions, and an object-specific preview benefit in all match-type conditions except the color condition. A race-model-inequality test using the RT distributions showed evidence for feature coactivation of similar magnitude in the SO and DO conditions, which does not support the idea that feature coactivation is modulated by access to an object file. A further analysis fitting ex-Gaussian distributions to the color-and-shape conditions revealed that the faster RT component was modulated by the match of a single feature, whereas the slower component was modulated by the match of the feature combination, suggesting that object-file reviewing is feature-based but that response selection is sensitive to feature combinations.
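For reference, the race-model-inequality test mentioned above is standardly Miller's (1982) bound; the abstract does not specify the estimator used, so the following is a minimal statement under the usual assumptions, with F denoting the cumulative RT distribution functions for the color-only (C), shape-only (S), and color-and-shape (CS) match conditions:

% Miller's race model inequality: under a race (separate-activation) model,
% the redundant-match CDF cannot exceed the sum of the single-match CDFs.
\[
  F_{CS}(t) \;\le\; F_{C}(t) + F_{S}(t) \qquad \text{for all } t .
\]
% A violation of this bound at any t is taken as evidence for coactivation.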
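Likewise, the ex-Gaussian analysis conventionally decomposes each RT distribution into a Gaussian component (parameters mu and sigma), which dominates the faster portion of the distribution, and an exponential tail (parameter tau), which captures the slower component. The density below is the standard parameterization (the fitting procedure used here is not given in the abstract), with Phi denoting the standard normal CDF:

% Ex-Gaussian RT density: convolution of a Gaussian (mu, sigma) with an
% exponential (tau). mu and sigma track the faster RT component; tau tracks
% the slow tail.
\[
  f(t \mid \mu, \sigma, \tau)
    = \frac{1}{\tau}
      \exp\!\left( \frac{\mu - t}{\tau} + \frac{\sigma^{2}}{2\tau^{2}} \right)
      \Phi\!\left( \frac{t - \mu}{\sigma} - \frac{\sigma}{\tau} \right).
\]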
Supported by MEXT Grant-in-Aid #21300103 and Global COE D07 to Kyoto University.