July 2013
Volume 13, Issue 9
Vision Sciences Society Annual Meeting Abstract | July 2013
The components and modality-specificity of word representations in the human visual system: an adaptation study
Author Affiliations
  • Diana Choi
    Department of Ophthalmology and Visual Sciences, University of British Columbia
    Department of Medicine (Neurology), University of British Columbia
  • Hashim Hanif
    Department of Ophthalmology and Visual Sciences, University of British Columbia
    Department of Medicine (Neurology), University of British Columbia
  • Charlotte Hills
    Department of Ophthalmology and Visual Sciences, University of British Columbia
    Department of Medicine (Neurology), University of British Columbia
  • Jason J S Barton
    Department of Ophthalmology and Visual Sciences, University of British Columbia
    Department of Medicine (Neurology), University of British Columbia
Journal of Vision July 2013, Vol.13, 1304. doi:https://doi.org/10.1167/13.9.1304
Diana Choi, Hashim Hanif, Charlotte Hills, Jason J S Barton; The components and modality-specificity of word representations in the human visual system: an adaptation study. Journal of Vision 2013;13(9):1304. https://doi.org/10.1167/13.9.1304.
Abstract

Background: While many studies have used adaptation to probe the neural representation of faces, few have used this technique to examine how words are represented in the human visual system. Last year we established that word aftereffects exist and are invariant to script style (Hanif et al, J Vis 2012: 12(9): 1060).

Objective: Our goals were, first, to use adaptation to examine the contribution of word components to the word aftereffect and, second, to determine whether aftereffects transfer across modalities.

Methods: 30 subjects participated in Experiment 1. Two pairs of compound words of equal length, in upper-case Arial font, were chosen as base stimuli. Ambiguous probe stimuli were created by overlaying the two words of a pair at varying transparency levels, with Gaussian noise added. The 5-second adapting stimuli were either the original words, words with their component morphemes rearranged, or a rearrangement of the original words’ letters into a meaningless string. 12 subjects participated in Experiment 2. Two pairs of words were chosen. Probe stimuli were generated either by the same transparency method or by a morphing procedure, for comparison. In the visual condition, the original words were presented as 5-second adapting stimuli; in the auditory condition, the adaptor was a 4.8-second recording of different individuals saying the original word every 800 ms.

Results: Experiment 1 generated a 17% aftereffect for whole words, while the rearranged morphemes generated a small 5% aftereffect and letter strings generated no aftereffect. Experiment 2 generated a 10% aftereffect for whole visual words, irrespective of probe type, but no aftereffect from auditory words.

Conclusion: Visual words have a strong representation at the whole-word level, with a minor morpheme-level component. As found previously for face expression and age aftereffects, there was no cross-modal transfer from the auditory sense.
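The probe-generation step described in the Methods (overlaying two word images at varying transparency levels and adding Gaussian noise) can be sketched as an alpha blend of two grayscale images. This is an illustrative reconstruction, not the authors' code; the function name, noise parameters, and image representation are assumptions.

```python
import numpy as np

def make_probe(word_a, word_b, alpha, noise_sd=0.1, rng=None):
    """Blend two grayscale word images (2-D arrays, values in [0, 1])
    into an ambiguous probe: `alpha` weights word_a and (1 - alpha)
    weights word_b, then Gaussian pixel noise is added.
    Parameter values here are illustrative, not those of the study."""
    if rng is None:
        rng = np.random.default_rng()
    blend = alpha * word_a + (1.0 - alpha) * word_b
    noisy = blend + rng.normal(0.0, noise_sd, size=blend.shape)
    # Keep pixel values in the displayable range.
    return np.clip(noisy, 0.0, 1.0)
```

Sweeping `alpha` from 0 to 1 yields a series of probes ranging from unambiguously one word to unambiguously the other, which is the kind of graded stimulus set an adaptation paradigm needs to measure a shift in perceptual judgments.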

Meeting abstract presented at VSS 2013
