Abstract
Is there a hemispheric asymmetry in the implicit learning of new visual features? New visual features represent spatial structures based on perceptual grouping (a right-hemisphere, RH, task), but they also give rise to conceptual knowledge of the feature (a left-hemisphere, LH, task). We contrasted the performance of 16 normal subjects with that of a split-brain patient. During practice, subjects viewed displays of seemingly random arrangements of simple shapes in a 3×3 grid, with each multi-shape scene presented for 3 s in either the right or the left visual field. Unbeknownst to the subjects, the shapes were organized into 2 horizontal, 2 vertical, and 2 oblique base-pairs. Each display was composed of three base-pairs in various grid positions (the elements of a base-pair always appeared together), and each base-pair appeared during practice in only one visual field, right or left, while subjects maintained fixation at the center of the screen. A 2AFC post-exposure test revealed that normal subjects could easily discriminate base-pairs from randomly combined shape-pairs presented in either the left or the right visual field, regardless of where the pairs had appeared during practice [71%, p < .0001]. In contrast, the split-brain patient's performance was at chance when the test displays were presented in the right visual field (LH), but significantly above chance when they were presented in the left visual field (RH) [59% and 78%, respectively], with pairs appearing on the same side during practice and test. Performance in both visual fields was at chance when pairs appeared on opposite sides during practice and test. These results suggest that the initial phase of statistical learning of new visual features is dominated by processing mechanisms in the RH, and they predict how the fMRI activation pattern in the LH and RH might change as subjects' perception shifts from an initial phase of naïve observation to a knowledge-based interpretation of visual scenes.