Humans are good at detecting blur (Hamerly & Dvorak, 1981; Watt & Morgan, 1983), but it is unclear how they do it. One possibility is that edge blur is extracted as a byproduct of edge detection (Elder & Zucker, 1998; Georgeson, May, Freeman, & Hesse, 2007; Lindeberg, 1994, 1998; May & Georgeson, 2007a, 2007b; Watt & Morgan, 1985). Generally, it is assumed that edges are detected by oriented filters, much like the receptive fields of V1 simple cells, and that these filters have a range of sizes, or scales, to capture the range of edge blurs that can occur. For example, in Marr's Primal Sketch (Marr & Hildreth, 1980), the image is initially analyzed with Laplacian-of-Gaussian filters of different sizes to yield zero crossings. These are then integrated into edges in V1, where each edge is characterized by various properties, including its scale, or blur.
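To make this multi-scale analysis concrete, the following is a minimal sketch of Marr-and-Hildreth-style processing: the image is convolved with Laplacian-of-Gaussian filters at several scales, and zero crossings are located in each filtered output. The scale values, image size, and test edge are illustrative choices, not parameters taken from the papers cited above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

def log_zero_crossings(image, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Map each filter scale to a boolean image marking zero crossings
    of the Laplacian-of-Gaussian response at that scale."""
    crossings = {}
    for sigma in sigmas:
        response = gaussian_laplace(image.astype(float), sigma)
        positive = response > 0
        # A zero crossing is a sign flip between horizontally or
        # vertically adjacent pixels of the filtered image.
        zc = np.zeros_like(positive)
        zc[:, 1:] |= positive[:, 1:] != positive[:, :-1]
        zc[1:, :] |= positive[1:, :] != positive[:-1, :]
        crossings[sigma] = zc
    return crossings

# Usage: a vertical step edge blurred by a Gaussian of sigma = 3 pixels.
edge = np.zeros((64, 64))
edge[:, 32:] = 1.0
maps = log_zero_crossings(gaussian_filter(edge, 3.0))
for sigma, zc in maps.items():
    print(f"scale {sigma}: {int(zc.sum())} zero-crossing pixels")
```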
The MIRAGE model (Watt & Morgan, 1983, 1985) also assumes that the image is analyzed by a bank of filters of different widths. Uniquely, however, MIRAGE combines these filter outputs into a single representation of the image in which scale is not explicitly represented; edge location and blur are then extracted by further simple calculations.
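As a rough illustration of that combination step, the sketch below half-wave rectifies the multi-scale filter outputs and sums them into a single pair of response profiles, so that scale is no longer explicitly represented. The rectify-and-sum rule, and the particular filters and scales, are assumptions for illustration, not a faithful reimplementation of Watt and Morgan's model; in the full model, edge location and blur would then be read off from simple statistics of the combined profiles, such as the mass and spread of their zero-bounded regions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d, gaussian_laplace

def mirage_combine(profile, sigmas=(1.0, 2.0, 4.0)):
    """Collapse a bank of multi-scale filter outputs into summed
    positive and negative response profiles (often written T+ and T-)."""
    t_plus = np.zeros_like(profile, dtype=float)
    t_minus = np.zeros_like(profile, dtype=float)
    for sigma in sigmas:
        response = gaussian_laplace(profile.astype(float), sigma)
        t_plus += np.maximum(response, 0.0)    # rectified positive part
        t_minus += np.maximum(-response, 0.0)  # rectified negative part
    return t_plus, t_minus

# Usage: combine responses to a one-dimensional blurred step edge.
step = np.zeros(256)
step[128:] = 1.0
t_plus, t_minus = mirage_combine(gaussian_filter1d(step, 3.0))
```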
The N1 and N3+ models (Georgeson et al., 2007), based upon the theoretical work of Lindeberg (1994, 1998), also assume that the image is analyzed by a bank of filters of different scales. In both models, the location and blur of an edge are found by looking for peaks in a scale-space representation of the image (Lindeberg, 1998; Witkin, 1983).
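The following is a minimal sketch of blur estimation via a scale-space peak, in the spirit of Lindeberg's scale selection rather than the specific N1 or N3+ mechanisms: the signal's gradient is computed at many scales, each response is scale-normalized, and the scale at which the normalized response peaks is read out as the edge blur. The gamma = 1/2 normalization and the scale grid are illustrative assumptions; for a Gaussian-blurred step edge, this normalization puts the peak at a scale equal to the edge blur.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def blur_from_scale_space(signal, sigmas, gamma=0.5):
    """Estimate edge blur as the analysis scale at which the
    scale-normalized gradient response is largest."""
    responses = []
    for sigma in sigmas:
        # First derivative of the Gaussian-smoothed signal,
        # normalized by sigma**gamma.
        gradient = gaussian_filter1d(signal, sigma, order=1)
        responses.append((sigma ** gamma) * np.abs(gradient).max())
    return sigmas[int(np.argmax(responses))]

# Usage: a step edge blurred by a Gaussian of sigma = 4 samples; the
# normalized response should peak near scale 4.
step = np.zeros(512)
step[256:] = 1.0
blurred = gaussian_filter1d(step, 4.0)
print(blur_from_scale_space(blurred, np.linspace(0.5, 12.0, 60)))  # ~4.0
```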
Other models of edge detection (Elder & Zucker, 1998) also embody the idea that, to detect edges, the image must be analyzed at different scales.