Abstract
When an image is degraded by noise or blur, an image enhancement technique can be used to push it toward its original appearance. This might mean either (a) getting the individual pixels to resemble those of the original, or (b) getting the overall “texture” (e.g., noisiness or sharpness) to resemble that of the original. Most techniques are designed to get the pixels right and use an objective error criterion such as mean squared error or a variant. In superresolution, the goal is to hallucinate missing high frequencies, especially those belonging to sharp edges. If the hallucinated edge position is slightly wrong, the error will be large. Therefore the best strategy may be to leave the edges soft, but then the goal of restoring sharpness has been lost. We argue that textural similarity is a valid additional criterion, both perceptually and statistically. If we know that our image came from a set of images with certain textural statistics, then we can impose those statistics as a prior. An image with sharp hallucinated edges gets points for looking sharp, which can balance the points lost due to misalignments or other errors. We have developed a superresolution technique that learns, based on a training set, to impose local estimates of subband coefficients as well as global estimates of subband histograms. We had subjects compare our enhanced images with those produced by competing techniques, including commercial superresolution software. Our images were judged significantly better looking than those of the competing techniques.
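For intuition only, the sketch below illustrates the global half of such a prior: rank-order histogram matching of a high-pass subband against a sharp reference image. It is not the paper's actual method; the single Gaussian high-pass subband, the names match_histogram and impose_subband_histogram, and the NumPy/SciPy implementation are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def match_histogram(source, target):
        """Remap `source` so its empirical distribution follows that of `target`
        (both are arrays of subband coefficients); classic rank-order matching."""
        s = source.ravel()
        ranks = np.argsort(np.argsort(s))                     # rank of each coefficient
        quantiles = (ranks + 0.5) / s.size                    # its empirical quantile
        t_sorted = np.sort(target.ravel())
        t_quantiles = (np.arange(t_sorted.size) + 0.5) / t_sorted.size
        matched = np.interp(quantiles, t_quantiles, t_sorted) # target value at each quantile
        return matched.reshape(source.shape)

    def impose_subband_histogram(image, reference, sigma=1.0):
        """Split `image` into a blurred low-pass band and a high-pass residual,
        match the residual's histogram to the reference's residual, and recombine."""
        low = gaussian_filter(image, sigma)
        high = image - low
        ref_high = reference - gaussian_filter(reference, sigma)
        return low + match_histogram(high, ref_high)

    # Toy usage with synthetic images standing in for an upsampled input
    # and a sharp training example.
    rng = np.random.default_rng(0)
    blurry = gaussian_filter(rng.random((64, 64)), 2.0)
    sharp = rng.random((64, 64))
    enhanced = impose_subband_histogram(blurry, sharp)

In this sketch, imposing the reference histogram restores the heavy-tailed coefficient statistics of a sharp image even where individual pixel values remain slightly misaligned, which is the sense in which textural similarity can complement a pixelwise error criterion.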