Abstract
Recent findings have suggested that the capacity to consolidate multiple items into visual short-term memory in parallel varies as a function of the type of information. That is, while color can be consolidated in parallel (JEP:HPP 2012, 429-438), evidence suggests that orientation cannot (APP 2013, 415-425). Here we investigated the capacity to consolidate multiple motion directions in parallel and re-examined this capacity for orientation. We used a matching task to determine the shortest exposure duration necessary to consolidate a single item, then examined whether two items, presented simultaneously, could be consolidated in that time. The results show that parallel consolidation of both direction and orientation information is possible. Additionally, we demonstrate the importance of adequate separation between the intervals used to define items along a feature dimension when attempting to consolidate in parallel, suggesting that when multiple items are consolidated in parallel, as opposed to serially, the resolution of their representations suffers. That is, performance was markedly poorer for parallel consolidation, compared to serial, when items were more similar, likely because low-resolution representations were mistaken for neighboring items during the matching stage of the task. Finally, consistent with this interpretation, we showed that facilitating spatial attention mitigates this effect, indicating that the deterioration of item resolution during parallel consolidation likely occurs during the encoding stage. Together, these results suggest that the variation in parallel consolidation efficiency observed across features may be, at least partially, a result of the size of the feature's perceptual space, i.e., the resolution at which a feature is processed relative to the size of its physical dimension.
Meeting abstract presented at VSS 2015