Open Access
Vision Sciences Society Annual Meeting Abstract | October 2020
Foveal and peripheral vision for assessing the quality of computer-generated images
Author Affiliations & Notes
  • Vasiliki Myrodia
    University of Lille, UMR CNRS 9193 - SCALab - Sciences Cognitives et Sciences Affectives, F-59000 Lille, France
  • Samuel Delepoulle
    Université du Littoral Côte d'Opale - LISIC - Laboratoire d'Informatique Signal et Image de la Côte d'Opale, Calais, France
  • Laurent Madelain
    University of Lille, UMR CNRS 9193 - SCALab - Sciences Cognitives et Sciences Affectives, F-59000 Lille, France
    Aix-Marseille Université, UMR 7289 CNRS, Institut de Neurosciences de la Timone, Marseille, France
  • Footnotes
    Acknowledgements: Funding from ANR grant ANR-17-CE38-0009
Journal of Vision October 2020, Vol.20, 355. doi:https://doi.org/10.1167/jov.20.11.355
      Vasiliki Myrodia, Samuel Delepoulle, Laurent Madelain; Foveal and peripheral vision for assessing the quality of computer-generated images. Journal of Vision 2020;20(11):355. https://doi.org/10.1167/jov.20.11.355.

Abstract

Computer-generated images (CGIs) are commonly used in printed and electronic media. The algorithms used to produce photorealistic CGIs induce visual noise, which varies inversely with computation time. Our research aims to improve this process by reducing computation time without a detectable loss of visual quality. This study builds on our previous work, which quantified the 50% perception threshold (PT) for each participant. To compare foveal versus peripheral information extraction, we conducted two experiments using sets of images at different stages of computation (i.e., with various noise levels) from two different CGIs. In both experiments, each image was cut and then merged with the highest-quality image (reference image; RI). Participants were asked to report, in a two-alternative forced-choice (2AFC) task, whether the displayed picture was composed of two different images or a single one. In Experiment 1 (n=20), we investigated the observers' ability to assess image quality using only peripheral vision. In peripheral vision, we displayed pictures composed of the RI and the PT image. In central vision, we used a gaze-contingent paradigm to display the highest-quality image through a Gaussian transparency mask at the gaze position. The mask diameter was adjusted on each trial using the QUEST+ Bayesian adaptive method. Results indicate that a mask of about 100 pixels (3.62 deg) significantly impairs the observers' ability to report a quality difference. In Experiment 2 (n=4), we recorded the observers' scan paths during scene exploration while they performed the 2AFC task. The composite pictures used three categories of images depending on the amount of noise (high, at PT, and low). Results show longer fixation durations in information-rich areas. Furthermore, participants were able to report the quality difference for the high-noise and PT images. These data reveal how visual information is extracted to detect different CGI qualities and could help optimize CGI computation.
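To make the gaze-contingent paradigm concrete, the following is a minimal sketch (not the authors' code) of how a highest-quality reference image can be alpha-blended over a noisier test image through a Gaussian transparency mask centred on the current gaze position. The image shapes, the diameter-to-sigma convention, and the sample gaze coordinates are illustrative assumptions.

```python
# Hypothetical sketch of the gaze-contingent blend: the reference image is
# shown through a Gaussian transparency mask at the gaze position, on top of
# the noisier test image. Shapes and conventions are assumptions.
import numpy as np

def gaze_contingent_blend(test_img, ref_img, gaze_xy, mask_diameter_px):
    """Alpha-blend ref_img over test_img through a Gaussian mask.

    test_img, ref_img : float arrays of shape (H, W, 3), values in [0, 1]
    gaze_xy           : (x, y) gaze position in pixel coordinates
    mask_diameter_px  : nominal mask diameter; treated here as ~4 sigma
    """
    h, w = test_img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy
    sigma = mask_diameter_px / 4.0            # assumption: diameter ~ 4 sigma
    d2 = (xs - gx) ** 2 + (ys - gy) ** 2
    alpha = np.exp(-d2 / (2.0 * sigma ** 2))  # 1 at gaze centre, -> 0 outside
    return alpha[..., None] * ref_img + (1.0 - alpha[..., None]) * test_img

# Example: a 100-pixel mask (about 3.62 deg in Experiment 1) at gaze
# position (400, 300) on a 600 x 800 display, with random stand-in images.
rng = np.random.default_rng(0)
test = rng.random((600, 800, 3))   # stand-in for the noisy test image
ref = rng.random((600, 800, 3))    # stand-in for the reference image
frame = gaze_contingent_blend(test, ref, gaze_xy=(400, 300), mask_diameter_px=100)
```

In the experiment itself, the diameter fed into such a blend would be set trial by trial by the QUEST+ procedure, sketched next.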
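QUEST+ maintains a posterior distribution over psychometric-function parameters and updates it after each response. Below is a simplified, single-parameter sketch of that Bayesian update for the mask-diameter threshold; the grid range, the logistic form, and the slope/guess/lapse values are assumptions, and the full QUEST+ method additionally estimates several parameters at once and selects each stimulus by minimizing expected posterior entropy.

```python
# Simplified QUEST+-style Bayesian threshold update (single parameter).
# All numeric parameter values below are illustrative assumptions.
import numpy as np

thresholds = np.linspace(10, 300, 60)  # candidate thresholds (mask diameter, px)
prior = np.ones_like(thresholds) / thresholds.size

def p_detect(diameter, threshold, slope=0.05, guess=0.5, lapse=0.02):
    """Probability of correctly reporting the quality difference.

    Detection gets harder as the mask grows (more of the reference image
    covers central vision), so the function decreases with diameter and
    floors at the 2AFC chance level of 0.5.
    """
    p = 1.0 / (1.0 + np.exp(slope * (diameter - threshold)))
    return guess + (1.0 - guess - lapse) * p

def update_posterior(prior, diameter, detected):
    """Bayes rule: multiply the prior by the likelihood of the response."""
    like = p_detect(diameter, thresholds)
    like = like if detected else 1.0 - like
    post = prior * like
    return post / post.sum()

# One trial: a 100-px mask was shown and the observer reported the difference.
posterior = update_posterior(prior, diameter=100, detected=True)
estimate = (thresholds * posterior).sum()  # running threshold estimate (px)
```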
