Vision Sciences Society Annual Meeting Abstract  |  September 2019
Volume 19, Issue 10  |  Open Access
Time (The ‘Audiovisual Rulez’ Remix)
Author Affiliations & Notes
  • Simon J Cropper
    School of Psychological Sciences, University of Melbourne.
  • Liheng W Xu
    School of Psychological Sciences, University of Melbourne.
  • Aurelio M Bruno
    School of Psychology, University of York, UK.
  • Alan Johnston
    School of Psychology, University of Nottingham, UK.
Journal of Vision September 2019, Vol. 19, 163b. https://doi.org/10.1167/19.10.163b
Abstract

We are interested in how we perceive time and how we accumulate and use our internal representation of a temporal interval to improve that percept. The work described here continues to examine the perception of short periods of time in visual, auditory and audiovisual stimuli, and the subjects’ ongoing knowledge of their own performance over repeated trials. Subjects were presented with two intervals containing a stimulus of the same duration (1500 ms or 3000 ms). The stimuli were visual gratings, auditory tones, or a combination of the two. Subjects initiated presentation of each interval with a button-press and released the button when they considered the stimulus to be half-way through; they then indicated their ‘best estimate’ of the pair. Each subject (n=8) performed 500 trials in the same order for six different stimulus conditions. Data were analysed in terms of first/second interval, ‘best’/‘worst’ Actual Observer estimate, and ‘best’/‘worst’ Ideal Observer estimate. From this we were able to judge both the subjects’ performance on the task and their insight into their own decisional ‘noise’ within an Ideal Observer framework. In both sub- and supra-second conditions, audiovisual cues gave estimates closer to the veridical bisection point for all subjects than the single-modality conditions did. This cannot be explained by simple probability summation across two cue sources. There was no evidence for a scalar effect of duration in any condition, and metacognition of performance was consistently good across conditions. Bayesian statistical modelling strongly supported optimal integration as an explanation of the data. Taken together (VSS 2017/18/19), our data suggest that subjects integrate effectively across modalities to generate an internal estimate of time close to, but subjectively different from, the actual time to be judged. This interval is learned rapidly but constantly updated throughout the observation period and is best explained within a Bayesian framework.
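
As a rough sketch of what ‘optimal integration’ predicts in a task like this (an illustration of standard maximum-likelihood cue combination, not the authors’ actual model; the noise values and trial count below are assumptions), reliability-weighted averaging of visual and auditory bisection estimates yields a combined estimate with lower variability than either cue alone, which is the signature that distinguishes integration from simple probability summation:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical parameters: a 3000 ms stimulus, so the veridical bisection point is 1500 ms.
    true_half = 1500.0                 # ms
    sigma_v, sigma_a = 180.0, 120.0    # assumed visual / auditory estimation noise (ms)
    n_trials = 500

    # Noisy single-modality estimates of the midpoint (Gaussian noise assumed).
    est_v = rng.normal(true_half, sigma_v, n_trials)
    est_a = rng.normal(true_half, sigma_a, n_trials)

    # Maximum-likelihood integration: each cue is weighted by its inverse variance.
    w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_a**2)
    w_a = 1 - w_v
    est_av = w_v * est_v + w_a * est_a

    # The MLE prediction for the combined noise is below either single-cue noise.
    sigma_av_pred = np.sqrt((sigma_v**2 * sigma_a**2) / (sigma_v**2 + sigma_a**2))

    print(f"SD visual      : {est_v.std():.1f} ms")
    print(f"SD auditory    : {est_a.std():.1f} ms")
    print(f"SD audiovisual : {est_av.std():.1f} ms (MLE prediction {sigma_av_pred:.1f} ms)")

Under these assumed noise levels the audiovisual standard deviation falls to roughly 100 ms, below both single cues, whereas probability summation (taking whichever single-cue estimate happens to be better on a given trial) cannot reduce variability below the more reliable cue.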
