Abstract
Background: While many studies have used adaptation to probe the neural representation of faces, few have used this approach to examine how words are represented in the human visual system. Last year we established that word aftereffects exist and are invariant for script style (Hanif et al., J Vis 2012; 12(9): 1060).

Objective: Our goals were, first, to use adaptation to examine the contribution of word components to the word aftereffect and, second, to determine whether aftereffects transfer across modalities.

Methods: Thirty subjects participated in Experiment 1. Two pairs of compound words of equal length, in upper-case Arial font, were chosen as base stimuli. Ambiguous probe stimuli were created by overlaying the two words of a pair at varying transparency levels and adding Gaussian noise. The 5-second adapting stimuli were either the original words, words with their component morphemes rearranged, or a rearrangement of the original words' letters into a meaningless string. Twelve subjects participated in Experiment 2. Two pairs of words were chosen. Probe stimuli were generated either by the same method or, for comparison, by a morphing procedure. In the visual condition, the original words were presented as 5-second adapting stimuli; in the auditory condition, the adaptor was a 4.8-second recording of different individuals saying the original word every 800 ms.

Results: Experiment 1 generated a 17% aftereffect for whole words, whereas rearranged morphemes generated a small 5% aftereffect and letter strings generated no aftereffect. Experiment 2 generated a 10% aftereffect for whole visual words, irrespective of probe type, but no aftereffect from auditory words.

Conclusion: Visual words have a strong representation at the whole-word level, with a minor morpheme component. As found previously for facial expression and age aftereffects, there was no cross-modal transfer from the auditory modality.
Meeting abstract presented at VSS 2013
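The probe-generation step described in the Methods (overlaying the two words of a pair at complementary transparencies, then adding Gaussian noise) can be illustrated with a minimal sketch. This is an illustrative reconstruction, not code from the study: the example words, image size, font path, blend levels, and noise standard deviation are all assumptions.

```python
# Sketch of ambiguous-probe generation: alpha-blend two rendered words
# and add Gaussian pixel noise. All parameter values are assumptions.
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def render_word(word, size=(400, 100), font_path="arial.ttf"):
    """Render an upper-case word as a grayscale image (white on black)."""
    img = Image.new("L", size, color=0)
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype(font_path, 48)
    draw.text((10, 25), word.upper(), fill=255, font=font)
    return np.asarray(img, dtype=np.float64) / 255.0

def make_probe(word_a, word_b, alpha, noise_sd=0.1, seed=None):
    """Blend two words: alpha=1.0 is pure word_a, alpha=0.0 is pure word_b."""
    a, b = render_word(word_a), render_word(word_b)
    blend = alpha * a + (1.0 - alpha) * b       # transparency overlay
    rng = np.random.default_rng(seed)
    blend += rng.normal(0.0, noise_sd, blend.shape)  # added Gaussian noise
    return np.clip(blend, 0.0, 1.0)

# Example: step an ambiguous probe between two equal-length compound
# words (hypothetical choices) to build a series of blend levels.
for alpha in (0.3, 0.4, 0.5, 0.6, 0.7):
    probe = make_probe("SUNFLOWER", "HONEYMOON", alpha, seed=0)
    Image.fromarray((probe * 255).astype(np.uint8)).save(f"probe_{alpha:.1f}.png")
```

Varying alpha around 0.5 yields probes that are increasingly ambiguous between the two base words, which is what allows an adaptation-induced shift in perception (the aftereffect) to be measured.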