Abstract
Color-graphemic synesthetes experience "color" when viewing achromatic alphanumeric characters. Previous studies have suggested that synesthetic colors behave like real colors in tasks where color is important. For example, some studies reported faster or more accurate performance by synesthetes searching for an achromatic inducing target among other achromatic distractors relative to normal controls (Palmeri et al., 2002; Ramachandran & Hubbard, 2001; Smilek et al., 2001). However, other studies found that synesthetes enjoy no advantage over control subjects (Edquist et al., 2006; Gheri et al., 2008). In the present study, we employed a visual search paradigm while varying viewing conditions, taking into account both the angular size of the search array and the angular subtense of each item. Three color-graphemic synesthetes (all associators) and matched controls participated in this study. Stimuli of resolvable size (e.g., a 2 among 5s) were presented in an array of three concentric circles (near: 1.99°, intermediate: 3.52°, far: 6.49°). Conditions included set size (small: 12, medium: 18, large: 24), color (real, synesthetic), and viewing (free, center-fixed). Observers made a speeded target-present/absent judgment while their eye movements were monitored. Free Viewing: in the real color condition, synesthetes and controls did not differ in search performance. In the synesthetic color condition, however, synesthetes found the target faster than controls with no loss of accuracy, supporting the perceptual reality of synesthetic color. Fixed Viewing: overall, synesthetes were faster than controls at finding the target; however, they were no faster, and were less accurate, than controls when the achromatic inducing target was located near fixation. The present results imply that synesthetes are better at finding a "colored" target even without overt attention. However, the performance enhancement from synesthetic color is evident when visual acuity falls off, such that search based on shape information alone becomes difficult.
Meeting abstract presented at VSS 2013
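As an illustration of the display geometry described above, the following minimal Python sketch lays out a search array on three concentric circles at the stated eccentricities (1.99°, 3.52°, 6.49°) with the stated set sizes (12, 18, 24). It assumes items are divided evenly over the circles and spaced at equal angular steps with a random rotation per circle; that distribution is an assumption for illustration, not a detail given in the abstract.

import math
import random

# Eccentricities (degrees of visual angle) of the three concentric circles
# and the set sizes reported in the abstract.
ECCENTRICITIES = {"near": 1.99, "intermediate": 3.52, "far": 6.49}
SET_SIZES = (12, 18, 24)

def layout_array(set_size, target_present=True, rng=None):
    """Return a list of (x_deg, y_deg, character) tuples for one search display.

    Distractors are 5s; on target-present trials one item is replaced by the
    achromatic inducing target 2. Even division over circles is assumed.
    """
    rng = rng or random.Random()
    per_circle = set_size // len(ECCENTRICITIES)
    items = []
    for ecc in ECCENTRICITIES.values():
        offset = rng.uniform(0, 2 * math.pi)  # random rotation of each circle
        for i in range(per_circle):
            theta = offset + 2 * math.pi * i / per_circle
            items.append([ecc * math.cos(theta), ecc * math.sin(theta), "5"])
    if target_present:
        rng.choice(items)[2] = "2"  # place the target at a random position
    return [tuple(item) for item in items]

# Example: one target-present display at the largest set size.
display = layout_array(set_size=24, target_present=True)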