Abstract
Intermediate areas of the object pathway appear to represent shape in terms of features of moderate complexity; however, the precise nature of this distributed code remains unclear. Here we use a novel method to evaluate the efficiency with which three candidate representations (Fourier Descriptors, Shapelets and Formlets) capture the planar shape information required for humans to reliably recognize objects. The Fourier Descriptor representation is the Fourier transform of the points defining the object boundary, represented as complex numbers; a good approximation to a shape is attained by truncating this Fourier sequence. Shapelets are a wavelet version of Fourier Descriptors, where each component is localized in both frequency and position along the curve; these are computed by matching pursuit. Formlets represent shape as a series of smooth localized deformations applied to an embryonic shape (an ellipse in our case), also computed using matching pursuit. We employed a database of 77 animal shapes from 11 categories. In objective terms (Euclidean error), these shapes are most efficiently coded by Shapelets, followed by Fourier Descriptors, and finally Formlets. To evaluate subjective efficiency, shapes were rendered using each of these three representations; the observer's task was to identify the category of each shape from four alternatives. For each representation, the number of shape components ranged from 1 to 10; a representation that reaches subjective threshold with fewer components may be closer to the code employed by the human visual system. For all 6 observers, Shapelets were found to have the lowest threshold (mean of 1.8±0.4 components), followed by Fourier Descriptors (4.0±0.4 components), and finally Formlets (5.5±0.6 components).
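The Fourier Descriptor truncation described above can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' code; the function name, the sampling density, and the convention of keeping symmetric positive/negative frequency pairs are assumptions for the sketch.

```python
import numpy as np

def fourier_descriptors(boundary, n_components):
    """Truncated Fourier Descriptor approximation of a closed contour.

    boundary: (N, 2) array of (x, y) points sampled along the outline.
    n_components: number of frequency pairs retained beyond the DC term.
    Returns the reconstructed (N, 2) contour.
    """
    # Represent the boundary points as complex numbers x + iy.
    z = boundary[:, 0] + 1j * boundary[:, 1]
    coeffs = np.fft.fft(z)

    # Truncate the Fourier sequence: keep the DC term (centroid) plus the
    # n_components lowest positive and negative frequencies.
    kept = np.zeros_like(coeffs)
    kept[0] = coeffs[0]
    for k in range(1, n_components + 1):
        kept[k] = coeffs[k]
        kept[-k] = coeffs[-k]

    z_rec = np.fft.ifft(kept)
    return np.column_stack([z_rec.real, z_rec.imag])

# Example: an ellipse contains only the +/-1 frequencies, so a single
# component pair reconstructs it exactly.
t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
ellipse = np.column_stack([2 * np.cos(t), np.sin(t)])
rec = fourier_descriptors(ellipse, 1)
```

For a natural animal contour, increasing `n_components` from 1 to 10 (as in the experiment) adds progressively finer boundary detail to the reconstruction.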
Interestingly, however, both Shapelets and Formlets reach subjective threshold at a higher mean objective error than Fourier Descriptors, suggesting that the human visual system relies upon localized basis functions for shape representation.
Meeting abstract presented at VSS 2015