September 2024
Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2024
Average Temperature from Visual Scene Ensembles Without Reliance on Color, Contrast or Low Spatial Frequencies
Author Affiliations
  • Vignash Tharmaratnam
    University of Toronto Scarborough
  • Dirk Bernhardt-Walther
    University of Toronto
  • Jonathan S. Cant
    University of Toronto Scarborough
Journal of Vision September 2024, Vol.24, 889. doi:https://doi.org/10.1167/jov.24.10.889
      Vignash Tharmaratnam, Dirk Bernhardt-Walther, Jonathan S. Cant; Average Temperature from Visual Scene Ensembles Without Reliance on Color, Contrast or Low Spatial Frequencies. Journal of Vision 2024;24(10):889. https://doi.org/10.1167/jov.24.10.889.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Summary statistics for groups (i.e., ensembles) of faces or objects can be rapidly extracted to optimize visual processing, without reliance on visual working memory (VWM). We have previously demonstrated that this ability extends to complex groups of scenes: participants were able to extract average scene content and spatial boundary from scene ensembles. In the present study we tested whether this ability extends to scene features that are not solely attributable to visual processing and are instead computed cross-modally. Specifically, we examined ensemble processing of scene temperature. Given that the apparent temperature (i.e., how hot or cold a scene would feel) of single scenes is accurately rated by observers (Jung & Walther, 2021), we predicted that average scene temperature could also be extracted by observers, without reliance on VWM. Crucially, across 4 experiments, we tested whether this ability depends on low-level visual features. Participants rated the average temperature of scene ensembles, with either colored stimuli (Exp. 1), gray-scaled stimuli (Exp. 2), gray-scaled stimuli with a 75% contrast reduction (Exp. 3), or gray-scaled, high-spatial-frequency-filtered stimuli (> 6 cycles/degree; Exp. 4). In all experiments, we varied set size by randomly presenting 1, 2, 4, or 6 scenes to participants on each trial, and measured VWM capacity using a 2-AFC task. Participants accurately extracted average temperature in all experiments, with all 6 scenes being integrated into their summary statistics. This occurred without reliance on VWM, as fewer than 1.2 scenes were remembered on average. These results reveal that computing cross-modal summary statistics (i.e., average temperature) does not rely on low-level visual features. Overall, these results reveal that, with minimal low-level visual information available, abstract multisensory information can be rapidly retrieved from long-term memory and combined to form statistical representations.
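The stimulus manipulations described for Exps. 3 and 4 (a 75% contrast reduction, and high-pass filtering above 6 cycles/degree) can be sketched as below. This is an illustrative reconstruction, not the authors' code: the pixels-per-degree value (`ppd`) and the filter shape (a hard circular cutoff in the Fourier domain) are assumptions.

```python
import numpy as np

def reduce_contrast(img, factor=0.25):
    """Scale contrast about the mean luminance of a 2-D grayscale image.
    A 75% contrast reduction (Exp. 3) keeps 25% of the original contrast."""
    mean = img.mean()
    return mean + factor * (img - mean)

def highpass_filter(img, cutoff_cpd=6.0, ppd=30.0):
    """Remove spatial frequencies below `cutoff_cpd` cycles/degree (Exp. 4).
    `ppd` (pixels per degree of visual angle) is a hypothetical display
    parameter; the true value depends on viewing distance and monitor."""
    h, w = img.shape
    fy = np.fft.fftfreq(h) * ppd          # cycles/degree along y
    fx = np.fft.fftfreq(w) * ppd          # cycles/degree along x
    radius = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    keep = radius >= cutoff_cpd           # hard high-pass cutoff
    spectrum = np.fft.fft2(img - img.mean())
    filtered = np.real(np.fft.ifft2(spectrum * keep))
    return filtered + img.mean()          # restore mean luminance
```

Note that both operations preserve mean luminance, so the manipulations remove contrast or low-frequency structure without changing overall brightness.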
