Abstract
Perceptual expertise refers to an improved ability to recognize, identify, and discriminate between items within a domain of expertise. Past work on perceptual expertise suggests that subordinate-level training, but not basic-level training or exposure, leads to increased discrimination of birds and cars (Scott et al., 2006; 2008). However, it is unclear whether the improvements in discrimination associated with increased perceptual expertise are accompanied by changes in visual strategies. Adults (n = 28) were trained to discriminate between "species" of novel computer-generated objects (Figure 1). Stimuli included two separate families of objects, each with 10 unique species. Within subjects, participants were trained (9 hours across a 2-3 week period) to discriminate one family at the subordinate level and the other at the basic level. Eye-tracking and accuracy (d') during a serial image matching task were assessed pre- and post-training. The ScanMatch Matlab Toolbox (Cristino et al., 2010) was used to further examine visual fixations by placing a grid over the image and coding the temporal and spatial sequences of fixations. Similarity scores were calculated within participants for each condition at both pre- and post-test. Consistent with past perceptual expertise training studies (Scott et al., 2006; 2008), accuracy on the serial image matching task increased from pre-test to post-test for the subordinate-trained (p < .001), but not the basic-trained, family. For eye-tracking, dwell time did not change from pre- to post-test and did not differ between basic- and subordinate-level training. Scan path analyses suggest that consistent fixation patterns emerge within participants after subordinate-level training (p < .001), but not basic-level training (Figure 2).
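The accuracy measure reported above (d') can be sketched as follows. This is a generic illustration of sensitivity for a same/different matching task, not the authors' analysis code; the trial counts and the log-linear correction shown here are assumptions for the example.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') for a same/different matching task.

    A log-linear correction (0.5 added to each cell) keeps perfect
    hit or false-alarm rates from producing infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Illustrative counts only: 45 hits / 5 misses, 10 FAs / 40 correct rejections
print(d_prime(45, 5, 10, 40))
```

A pre-to-post increase in this quantity for the subordinate-trained family is the accuracy effect the abstract reports.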
These results indicate that, unlike overall dwell time, changes in visual fixation patterns after subordinate-level training are consistent with increased discrimination and may be an important index of perceptual expertise.
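The grid-coding step behind the scan path analysis can be illustrated with a simplified sketch. ScanMatch proper aligns coded sequences with a Needleman-Wunsch algorithm and a substitution matrix that credits spatially close cells; this toy version substitutes Python's `difflib` for the alignment step, and the image size and grid dimensions are illustrative assumptions.

```python
from difflib import SequenceMatcher

def code_fixations(fixations, img_w=800, img_h=600, nx=8, ny=6):
    """Map (x, y) fixations to a string of grid-cell symbols,
    one symbol per fixation, preserving temporal order."""
    symbols = []
    for x, y in fixations:
        col = min(int(x / img_w * nx), nx - 1)
        row = min(int(y / img_h * ny), ny - 1)
        symbols.append(chr(ord('A') + row * nx + col))
    return ''.join(symbols)

def similarity(seq_a, seq_b):
    """Crude 0-1 similarity between two coded scan paths
    (stand-in for ScanMatch's Needleman-Wunsch alignment score)."""
    return SequenceMatcher(None, seq_a, seq_b).ratio()

# Two hypothetical scan paths over the same image
a = code_fixations([(100, 80), (420, 300), (700, 550)])
b = code_fixations([(110, 90), (430, 310), (150, 500)])
print(similarity(a, b))
```

Rising within-participant similarity scores of this kind, computed between trials at post-test, are what the abstract interprets as the emergence of consistent fixation patterns after subordinate-level training.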
Meeting abstract presented at VSS 2018