Abstract
Drawing is a powerful tool for communicating ideas: a few well-placed strokes can convey the identity of a face, object, or scene. How do people learn to produce recognizable drawings? In a previous study, we examined how practicing drawing objects in an online game influenced how recognizably those objects could be drawn later. Performance was quantified using a multi-way classifier trained on output from a high-performing deep convolutional neural network model of ventral visual cortex. After training, participants produced drawings that the model recognized more accurately, owing to greater differentiation between object representations in the model. Here we examine which aspects of the training procedure account for this improvement. One possibility is that repeated visual exposure to drawn images may be sufficient to improve performance. To test this, we recruited 593 naïve participants, each uniquely matched to a participant in the original cohort. They repeated the same procedure of drawing before and after training, except that during training, instead of drawing, they viewed the final sketch produced by their matched participant. These participants did not significantly improve, despite receiving identical, and arguably enhanced, visual exposure (they viewed only completed drawings). Another possibility is that the dynamic visual feedback generated while constructing drawings drove the improvement. To test this, we repeated the experiment with a new cohort of 593 participants who watched video replays of each drawing being produced, stroke by stroke, by their matched participant. These participants did significantly improve on the drawing task, though to a lesser degree than the original cohort. However, dynamic visual feedback did not fully reproduce the original results, as performance on related but unexposed objects was not reliably impaired. These studies suggest that observing the dynamic process of drawing construction plays a specific role in enhancing visual production skill.
Meeting abstract presented at VSS 2016