September 2021
Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2021
A computational model to predict the visibility of alpha-blended images
Author Affiliations
  • Taiki Fukiage
    NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation
  • Takeshi Oishi
    Institute of Industrial Science, The University of Tokyo
Journal of Vision September 2021, Vol.21, 2493. doi:https://doi.org/10.1167/jov.21.9.2493
Taiki Fukiage, Takeshi Oishi; A computational model to predict the visibility of alpha-blended images. Journal of Vision 2021;21(9):2493. https://doi.org/10.1167/jov.21.9.2493.

© ARVO (1962-2015); The Authors (2016-present)

      ×
  • Supplements
Abstract

Alpha blending is widely used to render an image semi-transparently over a background image. A drawback of this technique is that the visibility of the blended foreground image depends on the background: when the background contains a high-contrast texture, contrast masking greatly impairs the visibility of the foreground. It would therefore be desirable to adaptively adjust the blending parameter (alpha) to compensate for the masking effect, which requires a model that can predict the visibility of alpha-blended images. In this study, we tested early spatial vision models that can explain contrast masking as candidates for such a prediction model. As experimental stimuli, we used alpha-blended images generated from various types of images, including textures, natural scenes, and artworks. Visibility was measured with a visibility matching task, in which participants adjusted the alpha value of a test image until the visibility of its foreground patch matched that of a reference. There were two conditions: matching between images with the same foreground patches, and matching between images with different foreground patches. We used 2,000 different combinations of image patches for each condition. We found that the conventional spatial vision model could not predict the matching data well. To explain the data, we propose a content-adaptive feature aggregation mechanism that adaptively weights image features, such as spatial frequency and color information, based on the original appearance of the foreground image when aggregating those features into a single visibility level. Through ablation studies, we show that this adaptive weighting mechanism is important for accurately predicting the visibility of arbitrary images.
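For readers unfamiliar with the compositing operation the abstract refers to, the sketch below shows the standard alpha-blending rule, out = alpha * foreground + (1 - alpha) * background. This is a generic NumPy illustration of the operation, not the authors' model; the checkerboard background is a hypothetical example of the high-contrast texture condition under which contrast masking degrades foreground visibility.

```python
import numpy as np

def alpha_blend(foreground, background, alpha):
    """Standard alpha blending: out = alpha * fg + (1 - alpha) * bg.

    foreground, background: float arrays in [0, 1] with the same shape.
    alpha: scalar (or broadcastable array) in [0, 1]; larger alpha makes
    the foreground more visible.
    """
    fg = np.asarray(foreground, dtype=np.float64)
    bg = np.asarray(background, dtype=np.float64)
    return alpha * fg + (1.0 - alpha) * bg

# Hypothetical example: a uniform mid-gray foreground patch blended over a
# high-contrast 0/1 checkerboard background (the masking-prone case).
bg = np.indices((8, 8)).sum(axis=0) % 2   # checkerboard background
fg = np.full((8, 8), 0.5)                 # mid-gray foreground
blended = alpha_blend(fg, bg, alpha=0.3)  # low alpha: foreground barely visible
```

Because the blended result is a convex combination of the two images, the same alpha can yield very different perceived foreground visibility depending on the background content, which is the problem the proposed model addresses.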
