Abstract
Humans can quickly and accurately recognize scenes from line drawings, suggesting that contour lines are sufficient to capture important structural information about a scene. Indeed, previous work from our lab has shown that viewing line drawings elicits neural activation patterns similar to those evoked by photographs, and that curvature and junction information is most helpful for human scene categorization. However, these results are based on line drawings made by a single artist. In this study, we ask which contours and structural features are conserved across line drawings of scenes made by different people, and whether artistic expertise influences this consistency. We first developed software in MATLAB with the Psychophysics Toolbox for tracing outlines over photographs of natural scenes (18 scenes, 6 categories) using a graphics tablet. Contours can be drawn freehand or as a series of connected line segments; the spatial coordinates of the strokes are stored together with temporal order information. Next, we asked 43 participants with varying levels of artistic training to trace the contours in 5 photographs. We then extracted properties of contours (orientation, length, curvature) and contour junctions (types and angles) from each drawing. We found that people generally agree on some lines while differing on others: contour curvature, orientation, and junction types show the highest correlations between drawings of the same scene, across all scene categories, whereas contour length and junction angles are more variable between people. We are developing algorithms to determine matches between two individual drawings of the same image, and will present results showing which lines are most commonly agreed upon, with characterization of the underlying physical phenomena (occlusion boundaries, shadows, texture). In conclusion, our results quantify the degree of agreement between individuals in drawing the contours of scenes.
They highlight the importance of curvature and junctions for defining scene structure.
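The contour descriptors named above (orientation, length, curvature) can be illustrated with a minimal sketch. This is not the study's analysis code: the function name and the curvature definition used here (mean absolute turning angle per unit arc length over an ordered stroke of (x, y) points) are assumptions for illustration only.

```python
import math

def contour_properties(points):
    """Compute simple descriptors from an ordered stroke of (x, y) points.

    Returns (orientations, total_length, mean_curvature):
    per-segment orientations in radians, total arc length, and the
    average absolute turning angle per unit arc length (an illustrative
    curvature measure, not necessarily the one used in the study).
    """
    # Per-segment direction vectors and lengths
    dx = [points[i + 1][0] - points[i][0] for i in range(len(points) - 1)]
    dy = [points[i + 1][1] - points[i][1] for i in range(len(points) - 1)]
    seg_len = [math.hypot(a, b) for a, b in zip(dx, dy)]
    orientations = [math.atan2(b, a) for a, b in zip(dx, dy)]
    total_length = sum(seg_len)

    # Turning angles between consecutive segments, wrapped to [-pi, pi]
    turns = []
    for a, b in zip(orientations, orientations[1:]):
        t = (b - a + math.pi) % (2 * math.pi) - math.pi
        turns.append(abs(t))
    mean_curvature = sum(turns) / total_length if total_length > 0 else 0.0
    return orientations, total_length, mean_curvature

# A straight stroke has zero curvature; an L-shaped stroke does not.
_, straight_len, straight_curv = contour_properties([(0, 0), (1, 0), (2, 0)])
_, bent_len, bent_curv = contour_properties([(0, 0), (1, 0), (1, 1)])
```

On strokes recorded as coordinate sequences (as the tracing software stores them), such per-stroke descriptors can then be correlated between drawings of the same scene.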
Meeting abstract presented at VSS 2016