Within each feature dimension, accurate recognition can be severely limited by crowding, a disruptive interaction among adjacent objects that are otherwise visible in isolation (Bouma, 1970; Flom, Weymouth, & Kahneman, 1963). Crowding occurs when clutter falls within a region of space surrounding the target object, known as the interference zone, which increases in size with retinal eccentricity (Bouma, 1970; Toet & Levi, 1992). Errors made under crowded conditions correlate strongly with the features present in flanking objects (Dakin, Cass, Greenwood, & Bex, 2010; Huckauf & Heller, 2002; Strasburger, Harvey, & Rentschler, 1991), likely because crowded target features shift to more closely resemble those of the flankers (Greenwood, Bex, & Dakin, 2010). A range of theories has been proposed to account for these effects (reviewed by Levi, 2008), though a weighted averaging process that combines target and flanker features has arguably been the most successful, with clear applications to the crowding of orientation (Parkes, Lund, Angelucci, Solomon, & Morgan, 2001) and position (Dakin et al., 2010; Greenwood, Bex, & Dakin, 2009). The net effect is that the visual scene becomes simplified toward texture (Freeman & Simoncelli, 2011).
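As a rough illustration of the weighted-averaging account, the perceived target orientation can be modeled as a weighted circular mean of target and flanker orientations. The function, weight values, and equal-weighting scheme below are hypothetical choices for the sketch, not fitted parameters from the studies cited above:

```python
import math

def crowded_orientation(target_deg, flanker_degs, w_target=0.5):
    """Weighted circular mean of target and flanker orientations (degrees).

    Orientation is 180-degree periodic, so averaging is done on the
    doubled angle to handle wrap-around (e.g., 10 and 170 deg are only
    20 deg apart, not 160). The default weight of 0.5 on the target is
    illustrative only; in averaging models the weights would be fitted
    to observers' report distributions.
    """
    w_flank = (1.0 - w_target) / len(flanker_degs)
    pairs = [(target_deg, w_target)] + [(f, w_flank) for f in flanker_degs]
    # Sum weighted unit vectors at twice each orientation angle.
    x = sum(w * math.cos(math.radians(2 * a)) for a, w in pairs)
    y = sum(w * math.sin(math.radians(2 * a)) for a, w in pairs)
    # Halve the resultant angle to return to orientation space.
    return (math.degrees(math.atan2(y, x)) / 2) % 180
```

On this sketch, a vertical target (0 deg) flanked by a 20-deg flanker with equal weights is perceived at 10 deg, capturing the assimilative shift of target features toward the flankers described above.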