Christopher DiMattina, Curtis Baker; How texture elements are combined to detect boundaries: A machine learning approach. Journal of Vision 2018;18(10):795. https://doi.org/10.1167/18.10.795.
Natural boundaries are defined not only by differences in first-order cues such as luminance and color, but also by second-order cues such as contrast and texture. However, it remains poorly understood how second-order boundaries are represented in the visual system. Here we introduce a machine learning approach to modeling psychophysical performance with second-order boundaries, addressing how texture information is integrated across space and across multiple orientations of texture elements. Pairs of Gabor micropattern textures with differing texture contrasts were quilted to create texture-boundary stimuli. Subjects performed a 2AFC task to judge whether texture-contrast boundaries were left- or right-oblique, using a method of constant stimuli to measure (1) threshold modulation depth for ±45° boundary orientations, and (2) orientation discrimination for near-horizontal boundaries. Our model consisted of an initial array of fine-scale nonlinear subunits (V1-like filters) whose responses are combined as weighted sums (second-stage filters), which compete to produce the alternative responses on each 2AFC trial. Machine learning methods were used to fit the weights to trial-wise psychophysical data. The fitted models accurately predicted human performance on novel stimulus sets not used for parameter fitting. The estimated second-stage filters indicate that subjects utilize information throughout the entire extent of the stimuli in the modulation-depth task, but use only texture elements near the boundary for orientation discrimination; in both cases their behavior resembles that of an ideal observer. In additional experiments with carrier textures containing multiple orientations, we observed better fits to the data with an "orientation-opponent" model, in which second-stage filters integrate across multiple first-order orientation channels, than with a "non-opponent" model, in which each second-stage filter analyzes only a single orientation channel.
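The modeling approach described above can be sketched in miniature: first-stage subunit responses feed a second-stage filter whose weights are fit to trial-wise 2AFC choices. The sketch below is an illustrative assumption, not the authors' actual pipeline: the "first-stage" responses are simulated directly as noisy contrast energy on an oblique half-plane (standing in for rectified V1-like filter outputs over the Gabor micropattern texture), and the second-stage weights are fit by plain logistic regression on simulated trial-wise responses.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 16           # first-stage subunit grid is N x N (illustrative size)
n_trials = 2000

# Simulated "first-stage" responses: contrast energy elevated on one
# side of a left- or right-oblique boundary, with multiplicative noise.
yy, xx = np.mgrid[0:N, 0:N]
left_mask = (xx + yy < N).astype(float)    # left-oblique half-plane
right_mask = (xx < yy).astype(float)       # right-oblique half-plane

def make_trial(is_left, depth=0.5):
    """One trial's subunit responses for a given boundary orientation."""
    base = 1.0 + depth * (left_mask if is_left else right_mask)
    noise = rng.gamma(shape=4.0, scale=0.25, size=(N, N))  # mean 1, positive
    return base * noise

labels = rng.integers(0, 2, n_trials)      # 1 = left-oblique boundary
X = np.stack([make_trial(bool(lab)).ravel() for lab in labels])

# Second-stage filter: a weighted sum of subunit responses whose sign
# drives the 2AFC decision. Fit the weights to trial-wise choices by
# gradient descent on the logistic (cross-entropy) loss.
w = np.zeros(N * N)
b = 0.0
lr = 1e-3
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # P(respond "left-oblique")
    w -= lr * (X.T @ (p - labels)) / n_trials
    b -= lr * np.mean(p - labels)

accuracy = np.mean(((X @ w + b) > 0) == labels)
```

The fitted weight map `w.reshape(N, N)` plays the role of the estimated second-stage filter: inspecting which subunit locations receive large weights is the sketch-level analogue of asking whether observers pool texture information over the whole stimulus or only near the boundary.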
This work demonstrates the potential of machine learning methods for modeling second-order boundary perception, an important visual task which cannot be characterized using standard linear modeling techniques.
Meeting abstract presented at VSS 2018