COMPOUND GAIN: A visual distinctness metric for coder performance evaluation
Various distinctness metrics have been proposed to compare and rank target detectability and to quantify background or scene complexity. Computational measures of visual difference or distinctness are of great practical value: they can be applied to evaluate image displays, (virtual) scene generators, image compression methods, image reproduction methods, camouflage measures, and traffic safety devices. Relevant computational models of early human vision typically process an input image through various bandpass filters and analyze first-order statistical properties of the filtered images to compute a target distinctness metric. If such metrics are good predictors of target saliency for humans performing visual search and detection tasks, they may be used to compute the visual distinctness of image subregions (target areas) in digital imagery.
Target saliency for humans performing visual search and detection tasks can be estimated by the difference between the signal from the target-and-background scene and the signal from the background with no target. It often happens that the structure of a scene cannot be determined exactly (e.g., some details may not be observable, or the observer may not take all the relevant factors governing the structure into consideration). Under such circumstances, the structure of the reference image and the input image can be characterized statistically by discrete probability distributions. Since greater visual distinctness implies shorter recognition response times, the problem of predicting recognition times for humans performing visual search and detection tasks can then be reformulated as: what is the amount of relative information gain between the respective probability distributions?
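The relative information gain between two discrete distributions can be illustrated with the Kullback-Leibler divergence, which the compound gain generalizes. The histograms below are hypothetical; this is only a minimal sketch of the underlying quantity, not the paper's metric.

```python
import math

def information_gain(p, q, eps=1e-12):
    """Kullback-Leibler information gain (in bits) of distribution p over q.

    p, q: discrete probability distributions over the same grey-level bins.
    eps guards against zero bins in q."""
    return sum(pi * math.log2(pi / max(qi, eps)) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical grey-level histograms: a background-only distribution compared
# with itself gives zero gain; a shifted target-and-background distribution
# gives a strictly positive gain.
background = [0.25, 0.25, 0.25, 0.25]
with_target = [0.10, 0.20, 0.30, 0.40]
print(information_gain(background, background))       # 0.0
print(information_gain(with_target, background) > 0)  # True
```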
A generalization of the Kullback-Leibler joint information gain of various random variables, called the compound gain, is a measure of information gain between two images that satisfies a series of postulates which are natural and thus desirable (J.A. García, J. Fdez-Valdivia, X.R. Fdez-Vidal, R. Rodríguez-Sánchez, "Information theoretic measure for visual target distinctness", IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 23, no. 4, pp. 362-383, 2001).
The compound gain (CG) between a test image I and a decoded outcome O is computed over the significant locations of the test image I. Here g denotes the grey level; one local histogram is computed on a neighborhood of each significant location in the test image I, and another on the corresponding neighborhood in the decoded outcome O. The associated events denote that the feature at a given location is highly significant for explaining the information content of the test image I and of the reconstruction O, respectively, and each event has an a priori probability of occurrence.
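The structure of this definition can be sketched as follows: take the local grey-level histogram around each significant location in the test image and in the decoded outcome, measure the information gain between them, and weight each term by the location's prior probability of significance. This is a hedged illustration of that structure, not the exact formula from the paper; the function names, neighborhood size, and aggregation are assumptions.

```python
import numpy as np

def local_histogram(img, x, y, radius=4, bins=16):
    """Normalized grey-level histogram of a square neighborhood around (x, y)."""
    patch = img[max(0, y - radius):y + radius + 1, max(0, x - radius):x + radius + 1]
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return (hist + 1e-9) / (hist.sum() + bins * 1e-9)  # smoothed, sums to 1

def compound_gain_sketch(test, decoded, locations, priors):
    """Aggregate KL gain between local histograms of the test image and the
    decoded outcome at the significant locations, weighted by their priors."""
    total = 0.0
    for (x, y), p in zip(locations, priors):
        h_test = local_histogram(test, x, y)
        h_dec = local_histogram(decoded, x, y)
        total += p * np.sum(h_test * np.log2(h_test / h_dec))
    return total

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(np.float64)
noisy = np.clip(img + rng.normal(0, 25, img.shape), 0, 255)
locs, pri = [(16, 16), (48, 48)], [0.5, 0.5]
print(compound_gain_sketch(img, img, locs, pri))        # 0.0 for a perfect reconstruction
print(compound_gain_sketch(img, noisy, locs, pri) > 0)  # True: distortion raises the CG
```

A perfect reconstruction yields zero gain, and any visible distortion yields a positive value, matching the observation below that an optimal coder tends to produce the lowest CG.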
Given any coding scheme, the CG may then be applied to quantify visual distinctness by means of the difference between the original image and the decoded images at various bit rates. This allows us to analyze the behavior of coders from the viewpoint of the visual distinctness of their decoded outputs, taking into account that an optimal coder in this sense tends to produce the lowest value of the CG.
From the results in (J.A. García, J. Fdez-Valdivia, X.R. Fdez-Vidal, R. Rodríguez-Sánchez, "Information theoretic measure for visual target distinctness", IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 23, no. 4, pp. 362-383, 2001), we can conclude that the compound gain appears to relate to visual target distinctness as perceived by human observers. This result implies that the CG can be used to predict the visual distinctness of targets in complex backgrounds from digital imagery. This finding may eliminate the need for psychophysical experiments, which are time consuming and sometimes even impossible to perform.
Directory Structure and Building the Software
The Compound Gain software is available via anonymous ftp from ftp://decsai.ugr.es/pub/cvg/software/cg.tar.gz. This is a gzip'ed tarfile.
For getting and uncompressing the tar do:
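A typical session for fetching and unpacking the tarball might look like the following (assuming `wget` and GNU `tar` are available; any ftp client will do):

```shell
# Download the gzip'ed tarfile from the anonymous ftp server named above.
wget ftp://decsai.ugr.es/pub/cvg/software/cg.tar.gz

# Uncompress and unpack in one step; this creates the software's directory tree.
tar xzf cg.tar.gz
```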
The CGerror software is intended to be built using the file "Makefile".
Using the Software
The CGerror software consists of a single program:
The CGerror Command
Experiment 1: Comparative performance of the PSNR and the CG for predicting subjective image quality
This experiment was designed to analyze the comparative performance of the PSNR and the CG for predicting visual (subjective) quality of reconstructed images using several compression methods.
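For reference, the PSNR used as the baseline here is the standard peak signal-to-noise ratio for 8-bit images, 10 log10(255^2 / MSE). A minimal implementation (the test images below are synthetic examples, not the ones from the experiment):

```python
import numpy as np

def psnr(original, decoded, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized 8-bit images."""
    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.zeros((8, 8), dtype=np.uint8)
b = a.copy()
b[0, 0] = 16  # one pixel differs by 16 grey levels -> MSE = 256/64 = 4
print(psnr(a, a))             # inf
print(round(psnr(a, b), 2))   # 42.11
```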
The next figures (click here) show the respective reconstructed test images at 0.5, 0.25, and 0.16 bits per pixel (bpp).
Thirteen volunteers, nonexperts in image compression, subjectively evaluated the reconstructed images following an ITU-R Recommendation ("Broadcasting service (television)", Recommendation ITU-R BT.500-10, Supplement 3, 2000 edition). ITU-R BT.500-10 recommends classifying the test pictures into five quality groups: excellent, good, fair, poor, and bad.
The method of assessment was cyclic: the assessor was first presented with the original picture, then with the same picture decoded at a given bit rate, and was asked to vote on the second one while keeping the original in mind. Each assessor was presented with a series of pictures at different bit rates, in random order. At the end of the series of sessions, the mean score for each decoded picture was calculated. The next table summarizes the mean quality factors for the different decoded outputs using the compression methods (data in graphical format).
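The mean score per decoded picture is simply the average of the thirteen votes on the five-point scale (5 = excellent down to 1 = bad). The votes below are hypothetical, for illustration only:

```python
# Hypothetical votes from the 13 assessors on the ITU-R five-point quality
# scale (5 = excellent, 4 = good, 3 = fair, 2 = poor, 1 = bad) for one
# decoded picture.
votes = [4, 5, 4, 3, 4, 4, 5, 3, 4, 4, 3, 5, 4]
mean_score = sum(votes) / len(votes)
print(round(mean_score, 2))  # 4.0
```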
The next figures (click here) show 2D rate-distortion plots as given by the PSNR and the CG for CORAL, REWIC, JPEG2000, and SPIHT at 0.5, 0.25, and 0.16 bpp.
In summary, whereas the PSNR gives a poor measure of image quality, the CG is a good predictor of visual fidelity for humans performing subjective comparisons. For example, the PSNR predicts that SPIHT yields higher image fidelity than both CORAL and REWIC, which does not correlate with the subjective quality estimated by human observers. On the contrary, the overall impression is that, as predicted by the compound gain, the CORAL and REWIC schemes yield higher image fidelity than SPIHT, which does correlate with subjective fidelity. The CG also predicts better visual fidelity for CORAL than for the JPEG2000 reconstructed image at 0.5 and 0.25 bpp, in agreement with the subjective image quality measured by human observers, even though JPEG2000 gives better PSNR performance than CORAL at the same bit rates. As noted before, since CORAL and REWIC do not attempt to minimize MSE, they cannot be expected to prove their worth on a curve of PSNR versus bit rate.
Experiment 2: Comparative performance of the CG-rate curves for the JPEG2000, CORAL, SPIHT, and REWIC coding algorithms
In this section we compare the rate-distortion performance of JPEG2000, CORAL, SPIHT, and REWIC, where the distortion measure is the compound gain (CG).
Results were obtained without entropy-coding the bits put out by the CORAL, REWIC, and SPIHT schemes. The tests reported here were performed on a dataset of 49 standard 512x512 grayscale images.
The results of the comparison (CG curves for every test image) show the respective 2D rate-distortion plots as given by the CG for JPEG2000, CORAL, SPIHT, and REWIC. The compression ratio ranges from 64:1 to 16:1. As above, the CORAL, SPIHT, and REWIC outputs were not entropy-coded, so the bitstreams put out are binary uncoded.
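For 8-bit grayscale sources, compression ratio and bit rate are related by bpp = 8 / ratio, so the 64:1 to 16:1 range above corresponds to 0.125 to 0.5 bpp (and 0.16 bpp, used earlier, is a 50:1 ratio):

```python
def bpp_from_ratio(ratio, source_depth=8):
    """Bits per pixel of the compressed image, for an 8-bit grayscale source."""
    return source_depth / ratio

for ratio in (64, 50, 32, 16):
    print(f"{ratio}:1 -> {bpp_from_ratio(ratio):.3f} bpp")
```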
If you are unfortunate enough to encounter any problems with the CG software, please send a bug report to email@example.com. Always be sure to include the following information in a bug report:
These programs are Copyright © 2001 by Universidad de Granada, Xose R. Fdez-Vidal et al. They may not be redistributed without the consent of the copyright holders. In no circumstances may the copyright notice be removed. Permission to use, copy, or modify this software and its documentation for educational and research purposes only and without fee is hereby granted, provided that this copyright notice and the original authors' names appear on all copies and supporting documentation. For any other uses of this software, in original or modified form, including but not limited to distribution in whole or in part, specific prior permission must be obtained from the authors. These programs shall not be used, rewritten, or adapted as the basis of a commercial software or hardware product without first obtaining appropriate licenses from authors. Each program is provided as is, without any express or implied warranty, without even the warranty of fitness for a particular purpose.