Vision Sciences Society Annual Meeting Abstract | September 2016
Correcting for induction phenomena on displays of different size
Author Affiliations
  • Marcelo Bertalmío
    Universitat Pompeu Fabra, Spain
  • Thomas Batard
    Universitat Pompeu Fabra, Spain
  • Jihyun Kim
    Universitat Pompeu Fabra, Spain
Journal of Vision September 2016, Vol.16, 224. doi:https://doi.org/10.1167/16.12.224

Marcelo Bertalmío, Thomas Batard, Jihyun Kim; Correcting for induction phenomena on displays of different size. Journal of Vision 2016;16(12):224. https://doi.org/10.1167/16.12.224.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

In visual perception, induction designates the effect by which the lightness and chroma of a stimulus are affected by its surround, shifting towards the surround (assimilation) or away from it (contrast) depending on stimulus size (Helson 1963, Fach and Sharpe 1986). When looking at an image on a display, observers commonly place themselves at a distance from the screen such that the angle of view is larger for larger displays. As a consequence, visual induction phenomena also change from one screen to another: the same image may show significant assimilation effects when viewed on a mobile phone, and less assimilation or even contrast when viewed in the cinema. In this work we introduce a model for visual induction based on efficient coding that extends that of Bertalmío (2014). Given an input image, we convolve it with a kernel that depends on the apparent image size, and the resulting image qualitatively replicates psychophysical data. This allows us to propose the following method, by which an image can be pre-processed in a screen-size-dependent way so that its perception, in terms of visual induction, remains constant across different displays:

• Convolution of image I with kernel Sj produces image Oj, which predicts the appearance of I at angle of view j, j = 1, 2.
• A compensation kernel C12 is defined as the inverse Fourier transform of the ratio of the Fourier transforms of S1 and S2.
• Convolving I with C12 produces image Ip; convolving this new image with S2 then yields O1, the same result as convolving I with S1. In other words, if I was intended for screen 1, we can pre-process it with C12 to obtain Ip, so that when Ip is viewed on screen 2 it shows the same induction effects as I on screen 1.
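
The compensation step amounts to a deconvolution in the Fourier domain. Below is a minimal numerical sketch of this scheme, assuming periodic (circular) convolution, a small epsilon to regularize the frequency-domain division, and Gaussian kernels as purely hypothetical stand-ins for the induction kernels S1 and S2, which are not specified in this abstract.

import numpy as np

def gaussian_kernel(shape, sigma):
    # Centered, unit-sum 2-D Gaussian; a hypothetical stand-in for an induction kernel Sj.
    h, w = shape
    y, x = np.mgrid[:h, :w]
    g = np.exp(-((x - w // 2) ** 2 + (y - h // 2) ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def circular_convolve(image, kernel):
    # Periodic convolution via the FFT; ifftshift moves the kernel centre to the origin.
    return np.real(np.fft.ifft2(np.fft.fft2(image) *
                                np.fft.fft2(np.fft.ifftshift(kernel))))

def precompensate(I, S1, S2, eps=1e-6):
    # Build C12 in the Fourier domain as F(S1) / F(S2) (regularized by eps), then apply
    # it to I, so that convolving the result Ip with S2 approximates convolving I with S1.
    F_S1 = np.fft.fft2(np.fft.ifftshift(S1))
    F_S2 = np.fft.fft2(np.fft.ifftshift(S2))
    F_C12 = F_S1 / (F_S2 + eps)
    return np.real(np.fft.ifft2(np.fft.fft2(I) * F_C12))

# Toy check of the claim that Ip convolved with S2 matches I convolved with S1,
# using a random image and made-up kernel widths.
shape = (128, 128)
I = np.random.rand(*shape)
S1 = gaussian_kernel(shape, sigma=4.0)   # hypothetical kernel for viewing angle 1
S2 = gaussian_kernel(shape, sigma=2.0)   # hypothetical kernel for viewing angle 2
Ip = precompensate(I, S1, S2)
O1 = circular_convolve(I, S1)
O2 = circular_convolve(Ip, S2)
print("max |O1 - O2| =", np.max(np.abs(O1 - O2)))   # small, up to the eps regularization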

Meeting abstract presented at VSS 2016
