Vision Sciences Society Annual Meeting Abstract  |   August 2010
Crossmodal interaction in metacontrast masking
Author Affiliations
  • Su-Ling Yeh
    Department of Psychology, National Taiwan University
  • Yi-Lin Chen
    Department of Psychology, National Taiwan University
Journal of Vision August 2010, Vol.10, 894. doi:10.1167/10.7.894
Abstract

Metacontrast masking (MM) refers to the phenomenon of reduced target visibility due to a temporally lagging and spatially non-overlapping mask. It has been attributed to inhibition between low-level visual channels, such that transient activity triggered by the onset of the delayed mask inhibits sustained activity carrying the contours of the preceding target. Theories of MM have treated it as occurring exclusively in the visual domain, without considering signals from other modalities, such as audition. The current study explores the possible effects of sound on MM by using a contour discrimination task and measuring the change in perceptual sensitivity (d′) to the visual target with or without a sound. In Experiment 1, the sound was presented at different points in time relative to the target. The results showed that the visibility of the masked target was elevated when the sound was presented before the target. Accordingly, in Experiment 2, we adopted a spatial cueing paradigm in which the spatial congruency of the sound and target was manipulated. In Experiment 3, the target-sound SOA was further manipulated to probe the temporal window of the effect of sound on MM. An equivalent visual cue was also used for comparison in Experiments 2 and 3, to examine whether within- or cross-modal spatial cues would shift attention to the cued location in the standard MM task. The results showed that sound affected MM during the period of recovery from maximal masking (at SOAs beyond the point of strongest masking), indicating that sound enhanced target visibility in MM by orienting attention to its location, probably through a feedback modulation, to sustain the object representation of the visual target. This study sets a new example of audio-visual interaction for a phenomenon classically considered to be exclusively visual.

Yeh, S.-L., & Chen, Y.-L. (2010). Crossmodal interaction in metacontrast masking [Abstract]. Journal of Vision, 10(7):894, 894a, http://www.journalofvision.org/content/10/7/894, doi:10.1167/10.7.894.
Footnotes
 This research was supported by the National Science Council of Taiwan (NSC 96-2413-H-002-009-MY3 and NSC 98-2410-H-002-023-MY3).