September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Vision in the extreme-periphery (2): Concurrent auditory stimuli degrade visual detection
Author Affiliations & Notes
  • Takashi Suegami
    Yamaha Motor Corporation U.S.A.
    California Institute of Technology
  • Christopher C Berger
    California Institute of Technology
  • Daw-An Wu
    California Institute of Technology
  • Mark Changizi
    2AI Labs
  • Shinsuke Shimojo
    California Institute of Technology
Journal of Vision September 2019, Vol. 19, 19c.

      Takashi Suegami, Christopher C Berger, Daw-An Wu, Mark Changizi, Shinsuke Shimojo; Vision in the extreme-periphery (2): Concurrent auditory stimuli degrade visual detection. Journal of Vision 2019;19(10):19c.

      © ARVO (1962-2015); The Authors (2016-present)


Although peripheral vision (20°–40° eccentricity) has been characterized as categorically different from foveal vision in both spatial and temporal resolution, research on extreme-peripheral vision (> 40°) has been neglected. Previous work investigating cross-modal influences on visual perception suggests that concurrent auditory stimuli can facilitate visual detection in the fovea and periphery (e.g., Frassinetti et al., 2002). However, visual perception in the extreme periphery is highly ambiguous, and therefore likely to be the most susceptible to cross-modal modulation. Thus, we hypothesized that the brain compensates for visual ambiguity in the extreme periphery by utilizing inferences made from unambiguous cross-modal signals to either facilitate or inhibit visual perception (i.e., a ‘compensation hypothesis’). To test this hypothesis, we conducted a psychophysics experiment that examined the effect of auditory stimuli on visual detection in the extreme periphery. A white dot (2° in diameter) presented for 50 ms in either the left or right extreme periphery served as the target for the visual detection task (central fixation). The target location was set to each participant’s detection threshold (50% accuracy; M = 96.4°). Participants performed the visual detection task while one of four auditory stimuli (white noise, brown noise, beep, or no-sound control) was presented concurrently (50 ms) from a speaker co-located with the target. The target location and type of auditory stimulus were fixed within each session (8 sessions total), and session order was counterbalanced across participants. Contrary to the facilitation-based compensation hypothesis, the results (n = 16) showed that visual detection sensitivity (d′) was best when no sound was presented and significantly decreased when a beep was presented (p = .034), a result likely driven by an increased false-alarm rate in the beep condition (p = .053).
These results suggest that simultaneous cross-modal stimuli may suppress, rather than facilitate, ambiguous visual input in the extreme periphery.
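The sensitivity measure d′ reported above is the standard signal-detection statistic: the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch (using hypothetical rates, not the study's data) illustrates how a raised false-alarm rate alone, as in the beep condition, lowers d′ even with an unchanged hit rate:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates for illustration only:
no_sound = d_prime(0.70, 0.20)  # lower false-alarm rate
beep     = d_prime(0.70, 0.35)  # higher false-alarm rate -> lower d'
print(no_sound > beep)
```

Rates of exactly 0 or 1 must be adjusted (e.g., with a log-linear correction) before the z-transform, since the inverse normal CDF is undefined at those extremes.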

Acknowledgement: Yamaha Motor Corporation U.S.A. 
