Vision Sciences Society Annual Meeting Abstract | August 2012
The temporal resolution of binding brightness and loudness in dynamic random sequences
Author Affiliations
  • Daniel Mann
    Department of Cognitive Sciences, School of Social Sciences, University of California, Irvine
  • Charles Chubb
    Department of Cognitive Sciences, School of Social Sciences, University of California, Irvine
Journal of Vision August 2012, Vol. 12, 612. https://doi.org/10.1167/12.9.612

Citation: Daniel Mann, Charles Chubb; The temporal resolution of binding brightness and loudness in dynamic random sequences. Journal of Vision 2012;12(9):612. https://doi.org/10.1167/12.9.612.

Abstract

Purpose. This study investigated the ways in which observers can combine dynamic visual and auditory information.

Method. The stimuli were composed of gray disks accompanied by simultaneous bursts of auditory white noise. Three levels of disk brightness were crossed with three levels of noise loudness to produce 9 types of audiovisual token. On a given trial, 18 tokens (83 ms per token) drawn from these 9 token types were presented in random order. Different conditions required participants to try to classify stimulus sequences using various target filters that gave differential weight to the 9 audiovisual token types. In each condition, a probit model was used to measure the attention filter achieved by the participant (the impact exerted on the observer's judgments by each of the 9 token types). The model also included terms reflecting potential non-simultaneous "misbindings" of auditory and visual components across time.

Results. Participants demonstrated a high degree of strategic flexibility, achieving attention filters that varied widely across tasks; however, these filters often deviated strongly from the corresponding target filters. Model fits revealed that misbindings of auditory and visual components 83 ms apart influenced judgments with half the strength of simultaneous components. Components 167 ms apart did not misbind.

Conclusions. The temporal resolution of the binding achieved with these stimuli is higher than that found in tasks requiring the observer to judge the phase with which an alternating pair of visual stimuli matches up with an alternating pair of auditory stimuli (Fujisaki & Nishida, 2010). In correlating loudness and brightness, the space of attention filters that participants are able to achieve is at least three-dimensional.

Citation: Fujisaki, W., & Nishida, S. (2010). A common perceptual temporal limit of binding synchronous inputs across different sensory attributes and modalities. Proceedings of the Royal Society of London B: Biological Sciences, 277(1692), 2281-2290.
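To make the analysis concrete, one plausible form for the kind of probit model described in the Method section is sketched below. The notation (the baseline \(\beta_0\), the simultaneous weights \(w_{ij}\), and the lag-specific misbinding weights \(m^{(d)}_{ij}\)) is illustrative and is not taken from the authors' fit, which may have used a different parameterization or linking of token counts to responses.

\[
P(\text{respond ``target''} \mid \text{sequence}) \;=\; \Phi\!\left(\beta_0 \;+\; \sum_{i,j} w_{ij}\, n^{(0)}_{ij} \;+\; \sum_{d=1}^{2} \sum_{i,j} m^{(d)}_{ij}\, n^{(d)}_{ij}\right)
\]

Here \(\Phi\) is the standard normal cumulative distribution function, \(i, j \in \{1, 2, 3\}\) index the brightness and loudness levels, and \(n^{(d)}_{ij}\) counts the pairings within the 18-token sequence of brightness level \(i\) with loudness level \(j\) at a lag of \(d\) tokens (\(d = 0\) denotes simultaneous components; each lag step corresponds to 83 ms). Under this notation, the reported results would correspond to \(m^{(1)}_{ij} \approx w_{ij}/2\) (misbindings 83 ms apart carry half the weight of simultaneous pairings) and \(m^{(2)}_{ij} \approx 0\) (components 167 ms apart do not misbind).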

Meeting abstract presented at VSS 2012
