Abstract
How do early spatial filters in human vision encode elementary features such as bars and edges? Since visible features often correspond to peaks of luminance, gradient, or curvature in the smoothed intensity profile, Gaussian derivative (GD) filters up to order 3 (G0 to G3), at multiple scales, form a natural framework. Previous models have used even and odd filters, including GDs, either separately or in nonlinear combination, but none provides an adequate account of perceived features. I describe a scale-space model that extends our model of edge and blur coding (VSS 2001, 2003) to encompass bars and edges. Candidate edges (of positive or negative polarity) are peaks in a pair of nonlinear G3 scale-space response maps (E+, E−), while candidate bars are peaks in a pair of G2 maps (B+, B−). Both filtering schemes are derived from general scale-space principles (Lindeberg, IJCV, 1998). Crucially, features are output only if they are peaks in a composite response map: max[E+, E−, B+, B−]. Position, scale, and intensity are returned for each feature. With only two free parameters, this model accounts well for the edge and bar features seen on a wide variety of sharp or blurred 1-D luminance profiles. It correctly predicts Mach Bands on ramp edges and sine-wave edges, and correctly predicts their absence on Gaussian-blurred edges that are otherwise similar to the sine edge. It also correctly predicts a new phenomenon: ‘Mach Edges’, seen when the gradient profile (rather than the luminance profile) is a Mach ramp.
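The scheme described above can be sketched in code. This is not the authors' implementation: the scale-normalization exponents, the choice of half-wave rectification as the nonlinearity, and the sign conventions for the E and B maps are all assumptions made for illustration, and the model's two free parameters are not fitted here. The sketch builds rectified G2 (bar) and G3 (edge) scale-space response maps, combines them into the composite max[E+, E−, B+, B−], and reports positions that are local peaks of the composite.

```python
# Illustrative sketch of the abstract's feature-coding scheme (not the authors' code).
# Assumptions: half-wave rectification as the nonlinearity, and sigma^n
# scale normalization for the nth Gaussian derivative.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def feature_maps(profile, sigmas):
    """Rectified G2 (bar) and G3 (edge) responses at several scales."""
    maps = {"E+": [], "E-": [], "B+": [], "B-": []}
    for s in sigmas:
        g2 = gaussian_filter1d(profile, s, order=2) * s**2  # scale-normalized G2
        g3 = gaussian_filter1d(profile, s, order=3) * s**3  # scale-normalized G3
        maps["B+"].append(np.maximum(-g2, 0.0))  # bright bar: negative curvature
        maps["B-"].append(np.maximum(g2, 0.0))   # dark bar
        maps["E+"].append(np.maximum(-g3, 0.0))  # one edge polarity (sign assumed)
        maps["E-"].append(np.maximum(g3, 0.0))   # opposite polarity
    return {k: np.array(v) for k, v in maps.items()}

def detect_features(profile, sigmas):
    """Return positions that are peaks of the composite map max[E+, E-, B+, B-]."""
    maps = feature_maps(profile, sigmas)
    composite = np.max(np.stack(list(maps.values())), axis=0)  # (scale, position)
    best = composite.max(axis=0)  # strongest response across scales at each position
    peaks = [x for x in range(1, len(profile) - 1)
             if best[x] > best[x - 1] and best[x] >= best[x + 1]]
    return peaks, composite

# Usage: a bright Gaussian bar centered at position 128 on a flat background.
x = np.arange(256, dtype=float)
profile = np.exp(-0.5 * ((x - 128.0) / 6.0) ** 2)
peaks, _ = detect_features(profile, sigmas=[2, 4, 8])
```

In this toy case the B+ map dominates the composite at the bar's center, so a peak is reported near position 128; the full model would additionally return a scale and intensity for each such feature.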