A. Nicole Winter, Charles Wright, Charles Chubb, George Sperling; Conjunctive targets are better than or equal to both constituent feature targets in the centroid paradigm. Journal of Vision 2017;17(10):54. doi: https://doi.org/10.1167/17.10.54.
© ARVO (1962-2015); The Authors (2016-present)
In the centroid paradigm (Sun, Chubb, Wright, & Sperling, 2015), a method for studying feature-based attention, participants view a brief display of items and then estimate the centroid, or center of mass, of the target items while ignoring the distractors. In our previous work (Winter, Wright, Chubb, & Sperling, 2016), performance in conjunctive target conditions was better than in the feature target condition for one constituent feature dimension and worse than in the condition for the other. In this study, we find that performance in conjunctive target conditions is better than or equal to performance in both constituent feature target conditions.

Methods: Targets were defined by luminance (the darkest items), shape (the most circular items), or their conjunction (the darkest and most circular items). Each stimulus display contained items that varied over two levels of each feature dimension. These two levels were chosen to be either more or less similar, yielding four display types that were intermixed throughout the three blocked target conditions.

Results: As expected, performance in all three target conditions was better when the stimuli differed more on the relevant dimension(s). When both feature dimensions were sufficiently different, performance on the conjunction task was better than or equal to performance on both feature tasks.

Conclusion: Given the visual search literature, it is perhaps surprising that participants can estimate the centroids of conjunctive targets at all, let alone better than they can estimate those of constituent feature targets. The current findings suggest that conjunctive centroid judgments incur no performance cost; rather, they appear to offer an advantage when the levels of both feature dimensions are sufficiently different.
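The centroid response participants are asked to produce is simply the mean position of the target items. As a minimal sketch (the item coordinates, feature labels, and selection rule below are illustrative assumptions, not the study's actual stimuli), a conjunctive centroid judgment amounts to:

```python
def centroid(points):
    """Return the unweighted center of mass of a list of (x, y) positions."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Hypothetical display: each item has a position, a luminance level, and a
# shape level; "dark"/"light" and "circle"/"square" are placeholder labels.
items = [
    ((1.0, 2.0), "dark",  "circle"),   # conjunctive target
    ((3.0, 4.0), "dark",  "square"),   # matches luminance only
    ((5.0, 0.0), "light", "circle"),   # matches shape only
    ((2.0, 6.0), "dark",  "circle"),   # conjunctive target
]

# Conjunctive targets: the darkest AND most circular items.
targets = [pos for pos, lum, shape in items
           if lum == "dark" and shape == "circle"]
print(centroid(targets))  # mean of (1, 2) and (2, 6) -> (1.5, 4.0)
```

An ideal observer would report exactly this mean; the paradigm measures how closely, and with what attentional filtering of distractors, human responses approximate it.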
Meeting abstract presented at VSS 2017