Abstract
Background: Haijiang et al. [PNAS 2006] used a rotating Necker cube (a perceptually bistable stimulus) to show that inherently uninformative cues, such as the cube's position and translation direction, could bias perceived rotation direction during test trials after being associated with a particular rotation direction during training trials. However, they observed no such cue recruitment for an auditory cue (a repeating two-tone sequence that began 600 ms before the cube appeared).
Goal: We hypothesized that a more “plausible” auditory cue might be recruited. We therefore used auditory cues naturally associated with rotating objects, synchronized the audio with the cube's intermittent rotation, and simulated the sound as emanating from the cube's location.
Methods: The auditory cues were “ratchet” and “camera-film-winding” sounds. On each trial, a stationary cube appeared, then rotated in time with the sound, stopped, and rotated again. In the first condition, only the sound type (ratchet or camera-film winding) was contingent on rotation direction during training; the cue's location was fixed and simulated to emanate from the cube. In the second condition, both the sound type and the location of the sound were contingent on rotation direction during training.
Results and Conclusion: Twelve trainees showed no cue recruitment in either condition, which suggests that the construction of visual appearance may be inherently less likely to be influenced by new auditory cues than by new visual cues. Cross-modal interactions are well established (e.g., the McGurk effect [McGurk & MacDonald, Nature 1976] and the bounce/pass effect [Sekuler, Sekuler & Lau, Nature 1997]), so it seems likely that such learning could occur under appropriate circumstances that remain to be determined.
Grant support: Human Frontier Science Program, NSF BCS-0810944