Abstract
Extracting global properties of a scene is very fast and can occur even without intention under a distributed attention mode. A previous study showed that mean size computation is also biased by a set of irrelevant items that should be ignored for accurate performance (Oriet & Brand, 2013). The current study investigated whether ensemble information from multiple feature dimensions (orientation and color) can be represented simultaneously and separately for each feature, regardless of task relevance. We used a paradigm that examines how feature information held in visual working memory (VWM) alters subsequent perception of the corresponding feature dimension depending on its task relevance (Teng & Kravitz, 2019). Using a single stimulus with multiple features, Teng and Kravitz showed that the content of VWM influenced the discriminability of subtle stimulus differences only when the feature dimension of the perceptual task was relevant to the feature maintained in VWM. Instead of a single stimulus, we used a set of Gabors with heterogeneous orientations and colors. Participants were asked to attend to only one feature dimension of the Gabors and to remember the mean of the designated feature. During maintenance, they performed a perceptual discrimination task on one feature dimension that was either the same as (i.e., task-relevant) or different from (i.e., task-irrelevant) the dimension of the VWM task, yielding four between-subjects conditions. Results showed that, regardless of task relevance, the orientation ensemble information of the memory display affected subsequent orientation discriminability, whereas the color ensemble information did not. However, after excluding trials in which the memory display was perceived as containing two distinct colors, we found a similar tendency for color ensemble information as for orientation.
These results suggest that statistical information from multiple feature dimensions can be extracted simultaneously and can affect subsequent perception.