Erik Blaser, Zsuzsa Kaldy, Kemarah Eddy, Marc Pomplun; Determining salience for complex objects. Journal of Vision 2005;5(8):1005. doi: https://doi.org/10.1167/5.8.1005.
Purpose: Our overall objective is to determine how salience — the visual system's assessment of relative biological relevance, on which attention allocation is thought to be based — is computed for complex objects. In this initial experiment, we sought to determine whether the detectability of such objects carries the signature of object-based attention; that is, whether multiple feature dimensions are processed independently by default. Methods: A 20×20 ‘Vasarely’ array of Gabor patches was presented to observers, with an embedded 4×4 object. Background elements were governed by a 3D Gaussian in feature space (spanned by the dimensions of color, orientation, and spatial frequency), as were object elements. The object, however, was defined by having either a higher mean or a higher variance along one, two, or three dimensions. Results: Not surprisingly, detection rates increased monotonically with greater differences between the means of the object and background distributions, or with greater variance differences. As expected, too, when means or variances were changed along two dimensions, detection thresholds dropped; with 3D manipulations, thresholds were lowest. The critical question, though, is whether detection rates match the expectation based on independent treatment of the dimensions: under independence, the detection rate should equal 1 minus the product of the probabilities of missing the object's (say) mean difference along each of the manipulated dimensions. This pattern of results is exactly what we found, for both mean and variance increases, for all pairwise dimension manipulations. Conclusions: With respect to detection, the feature dimensions of statistically defined, complex objects are treated independently; we feel this is a key piece of support for emerging object-based models of salience.
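The independence prediction described in the Results can be made concrete with a short sketch. All detection-rate values below are hypothetical placeholders for illustration, not the study's data; only the combination rule (detection probability equals 1 minus the product of the per-dimension miss probabilities) comes from the abstract.

```python
def predicted_joint_detection(*single_dim_rates):
    """Predicted detection rate when several feature dimensions are
    manipulated together, assuming each dimension's difference is
    missed independently:
        P(detect) = 1 - prod_i P(miss dimension i)
                  = 1 - prod_i (1 - P(detect dimension i))
    """
    p_miss_all = 1.0
    for rate in single_dim_rates:
        p_miss_all *= (1.0 - rate)  # probability of missing this dimension
    return 1.0 - p_miss_all

# Hypothetical example: suppose a color-only difference is detected 60% of
# the time and an orientation-only difference 50% of the time. Independence
# then predicts a joint detection rate of 1 - 0.4 * 0.5 = 0.8.
print(round(predicted_joint_detection(0.60, 0.50), 2))  # 0.8

# The same rule extends to the 3D manipulations: adding a third dimension
# (e.g., spatial frequency at 40% detection) can only lower the miss
# probability, consistent with the lowest thresholds occurring there.
print(round(predicted_joint_detection(0.60, 0.50, 0.40), 2))  # 0.88
```

The empirical test is whether observed joint detection rates track this prediction; rates reliably above it would instead suggest interactive (non-independent) combination of the dimensions.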