Vision Sciences Society Annual Meeting Abstract | September 2016
Video Quality Assessment Using Motion Silencing
Author Affiliations
  • Lark Kwon Choi
    Department of Electrical and Computer Engineering, The University of Texas at Austin
  • Alan Bovik
    Department of Electrical and Computer Engineering, The University of Texas at Austin
Journal of Vision September 2016, Vol. 16, 445. https://doi.org/10.1167/16.12.445
Abstract

Salient luminance (or color, etc.) changes in a stimulus are imperceptible in the presence of large, coherent object motions (Suchow and Alvarez, 2011). In a series of human subjective studies, we found that this now well-known "motion silencing" phenomenon also occurs in naturalistic videos, where large motion strongly suppresses flicker visibility (Choi et al., 2015). Based on these visual change silencing effects, we have developed a new video quality assessment (VQA) model that accounts for temporal flicker masking on distorted natural videos. The flicker-sensitive, motion-tuned framework first linearly decomposes the reference and test videos using a multiscale spatiotemporal 3D Gabor filter bank. The outputs of quadrature pairs of linear Gabor filters are squared and summed to measure motion energy, and these responses are then divisively normalized to model the nonlinear adaptive gain control of V1 complex cells. We capture perceptual flicker visibility by measuring locally shifted response deviations relative to those of the reference video in each subband, then define the sum of these deviations as a perceptual flicker visibility index. Spatial video quality is predicted from the spatial errors of each subband Gabor response and of the DC subband Gaussian filter output, using divisive normalization. To measure perceptual temporal video quality, flicker visibility is combined with a motion-tuned space-time distortion measure that relies on a model of motion processing in Area MT (Seshadrinathan and Bovik, 2010). Results show that the quality predicted by the proposed VQA model correlates quite well with human subjective judgments of the quality of distorted videos, and its performance is highly competitive with, and indeed exceeds, that of most recent VQA algorithms tested on the LIVE VQA database. We believe that perceptual temporal flicker masking, as a form of temporal visual masking, will play an increasingly important role in modern models of objective VQA.
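For concreteness, the following Python sketch (not the authors' code) illustrates the energy-model stages the abstract describes: quadrature Gabor outputs are squared and summed into subband energies, the energies are divisively normalized, and a flicker visibility index is formed by summing deviations between the normalized responses of the test and reference videos. All names and parameter values (filter size, spatial frequency, the stabilizing constant sigma) are illustrative assumptions; 2D spatial filters applied per frame stand in for the model's multiscale spatiotemporal 3D Gabor bank, and plain deviations stand in for its locally shifted deviations.

    # Illustrative sketch only: 2D spatial Gabor filters per frame stand in
    # for the 3D spatiotemporal bank of the actual model; all parameter
    # values below are assumed, not those of the published algorithm.
    import numpy as np
    from scipy.ndimage import convolve

    def gabor_quadrature_pair(size=11, freq=0.2, theta=0.0):
        # Even (cosine) and odd (sine) Gabor filters sharing one Gaussian
        # envelope, oriented at angle theta.
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        u = x * np.cos(theta) + y * np.sin(theta)
        envelope = np.exp(-(x**2 + y**2) / (2.0 * (size / 4.0) ** 2))
        return (envelope * np.cos(2 * np.pi * freq * u),
                envelope * np.sin(2 * np.pi * freq * u))

    def subband_energy(frame, even, odd):
        # Square and sum the quadrature-pair outputs: a phase-insensitive
        # local energy, as in standard motion-energy models.
        r_even = convolve(frame, even, mode="nearest")
        r_odd = convolve(frame, odd, mode="nearest")
        return r_even**2 + r_odd**2

    def divisive_normalization(energies, sigma=0.01):
        # Normalize each subband by the energy pooled across subbands,
        # modeling the adaptive gain control of V1 complex cells.
        return energies / (energies.sum(axis=0, keepdims=True) + sigma)

    def flicker_visibility_index(ref_video, test_video, bank, sigma=0.01):
        # Sum, per frame, the deviations between the test and reference
        # normalized subband responses, then average over frames.
        def norm_resp(frame):
            e = np.stack([subband_energy(frame, ev, od) for ev, od in bank])
            return divisive_normalization(e, sigma)
        devs = [np.abs(norm_resp(t) - norm_resp(r)).sum()
                for r, t in zip(ref_video, test_video)]
        return float(np.mean(devs))

    # Example usage: a four-orientation, single-scale bank, applied to
    # video volumes of shape (frames, height, width) with values in [0, 1].
    bank = [gabor_quadrature_pair(theta=t)
            for t in np.linspace(0.0, np.pi, 4, endpoint=False)]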

Meeting abstract presented at VSS 2016
