Abstract
Real-world objects have a variety of features with different probability distributions. A tree leaf can have a unimodal hue distribution in summer that changes to a bimodal one in autumn. We have previously shown that perceptual systems can learn not only summary statistics (mean or variance) but also distribution shapes (probability density functions). To use such information, observers need to relate it to spatial locations and other features. We investigated whether observers can do this during visual search. Ten observers looked for an odd-one-out line among 64 lines differing in orientation. Each observer participated in five conditions consisting of interleaved prime (5-7 trials) and test (2 trials) streaks. Distractors on prime streaks were randomly drawn from a mixture of two Gaussian distributions (10° SD) or a mixture of a Gaussian and a uniform distribution (20° range), with means located ±20° from a random value. The target was oriented 60° to 90° away from the mean of the resulting bimodal distribution. During test streaks, both the target and the distractor mean changed, with distractors randomly drawn from a single Gaussian distribution. In the spatially-bound condition, the two prime distributions were spatially separated: distractors from one distribution appeared on the left, the rest on the right. In the feature-bound condition, distractors from one distribution were blue and the others yellow (the target was randomly yellow or blue). We analyzed RTs on test trials as a function of distance in feature space from the prime-streak distractor distributions and of target location or color. Separation of the distributions by location and, to a lesser extent, by color allowed observers to encode them separately. However, the properties of one distribution affected the encoding of the other. The results demonstrate both the power and the limitations of distribution encoding: observers can encode more than one distribution simultaneously, but each resulting representation is affected by the other distributions.
Meeting abstract presented at VSS 2017
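
To make the stimulus-generation procedure concrete, the following is a minimal Python sketch of how prime-streak distractor and target orientations could be sampled. It assumes a 50/50 mixture of the two components, 63 distractors plus one target, and orientations wrapped into [0°, 180°); the function names and these assumptions are illustrative and not taken from the original study.

    import numpy as np

    rng = np.random.default_rng()

    def sample_prime_distractors(n=63, sd=10.0, offset=20.0, uniform_range=20.0,
                                 mix="gaussian-gaussian"):
        """Sample n distractor orientations (deg) for one prime-streak display.

        Component means sit +/- `offset` deg from a random reference value.
        Components are two Gaussians (10 deg SD) or, for mix="gaussian-uniform",
        a Gaussian and a uniform spanning `uniform_range` deg (per the abstract).
        The 50/50 component assignment is an assumption.
        """
        reference = rng.uniform(0.0, 180.0)          # random central orientation
        means = np.array([reference - offset, reference + offset])
        which = rng.integers(0, 2, size=n)           # component assignment per item
        gaussian_draws = rng.normal(means[which], sd)
        if mix == "gaussian-uniform":
            uniform_draws = rng.uniform(means[which] - uniform_range / 2,
                                        means[which] + uniform_range / 2)
            samples = np.where(which == 0, gaussian_draws, uniform_draws)
        else:
            samples = gaussian_draws
        return reference, samples % 180.0            # wrap into [0, 180) deg

    def sample_target_orientation(reference):
        """Target oriented 60-90 deg from the mean of the bimodal distribution."""
        sign = rng.choice([-1.0, 1.0])
        return (reference + sign * rng.uniform(60.0, 90.0)) % 180.0

For example, calling sample_prime_distractors(mix="gaussian-uniform") followed by sample_target_orientation(reference) yields one display's worth of orientations; on a test streak, one would instead draw all distractors from a single Gaussian around a new reference value.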