Vasiliki Myrodia, Samuel Delepoulle, Laurent Madelain; Foveal and peripheral vision for assessing the quality of computer-generated images. Journal of Vision 2020;20(11):355. doi: https://doi.org/10.1167/jov.20.11.355.
Computer-generated images (CGIs) are commonly used in printed and electronic media. The algorithms used to produce photorealistic CGIs induce visual noise, which varies inversely with computation time. Our research aims to improve this process by reducing computing time without a detectable loss of visual quality. This study builds on our previous work quantifying the 50% perception threshold (PT) for each participant.
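The abstract does not state how the 50% PT was estimated; as a minimal sketch, one could linearly interpolate the noise level at which each participant's detection probability crosses 0.5 (a full psychometric-function fit would be the more typical choice):

```python
import numpy as np

def threshold_50(noise_levels, p_detect):
    """Estimate the 50% perception threshold (PT) from detection rates.

    Illustrative assumption: detection probability increases with noise
    level, so we linearly interpolate the noise level at which the
    probability of reporting a difference crosses 0.5.
    """
    order = np.argsort(noise_levels)
    x = np.asarray(noise_levels, dtype=float)[order]
    p = np.asarray(p_detect, dtype=float)[order]
    return float(np.interp(0.5, p, x))
```

For example, with detection rates 0.1, 0.4, 0.6, 0.9 at noise levels 0, 1, 2, 3, the estimated PT falls midway between levels 1 and 2.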
To compare foveal versus peripheral information extraction, we conducted two experiments using sets of images at different stages of computation (i.e., with various noise levels) from two different CGIs. In both experiments, each image was cut and then merged with the highest-quality image (reference image; RI). In a 2AFC task, participants reported whether the displayed picture was composed of two different images or a single one.
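The cut-and-merge stimulus can be sketched as follows; the cut position and orientation are assumptions for illustration, since the abstract does not specify the cutting scheme:

```python
import numpy as np

def compose_stimulus(noisy, reference, split_col):
    """Merge a noisy render with the reference image (RI) along a
    vertical cut at column `split_col` (illustrative assumption)."""
    assert noisy.shape == reference.shape
    composed = reference.copy()
    composed[:, :split_col] = noisy[:, :split_col]  # left half: noisy render
    return composed

# Example: 64x64 grayscale arrays, cut at the midline
ref = np.zeros((64, 64))
noisy = np.random.default_rng(0).normal(0.5, 0.1, (64, 64))
stim = compose_stimulus(noisy, ref, split_col=32)
```

On "same image" trials the RI would simply be merged with itself, so the two trial types differ only in the half drawn from the noisy render.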
In Experiment 1 (n=20), we investigated observers' ability to assess image quality using only peripheral vision. In the periphery, we displayed pictures composed of the RI and the PT-image. In central vision, we used a gaze-contingent paradigm to display the highest-quality image through a Gaussian transparency mask at the gaze position. The mask diameter was adjusted on each trial using a QUEST+ Bayesian adaptive method. Results indicate that a mask of about 100 pixels (3.62 deg) significantly impairs the observer's ability to report a quality difference.
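The gaze-contingent blend can be sketched as an alpha composite through a Gaussian mask centred at the gaze position; the mask width parameter and the exact transparency profile are assumptions here, as only the effective diameter (~100 px, 3.62 deg) is given:

```python
import numpy as np

def gaze_contingent_blend(high_q, low_q, gaze_xy, sigma_px):
    """Show the high-quality image through a Gaussian transparency mask
    at the gaze position, with the low-quality image elsewhere.

    `sigma_px` (mask width in pixels) is a hypothetical parameter; the
    experiment adapted the mask diameter trial by trial via QUEST+.
    """
    h, w = high_q.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy
    alpha = np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2 * sigma_px ** 2))
    return alpha * high_q + (1 - alpha) * low_q

# At the gaze position the blend is fully high-quality; far from it,
# fully low-quality.
high = np.ones((50, 50))
low = np.zeros((50, 50))
out = gaze_contingent_blend(high, low, gaze_xy=(25, 25), sigma_px=5)
```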
In Experiment 2 (n=4), we recorded observers' scan-paths during scene exploration while they performed the 2AFC task. The composed pictures used three categories of images depending on the amount of noise (high, at PT, and low). Results show longer fixation durations in information-rich areas. Furthermore, participants were able to report the quality difference for the high-noise and PT-images.
These data reveal how visual information is extracted to detect different CGI qualities and could help optimize CGI computation.