A Theory of the Distortion-Perception Tradeoff in Wasserstein Space

Authors: Dror Freirich, Tomer Michaeli, Ron Meir

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, we illustrate our results, numerically and visually, in a super-resolution setting in Section 5. The proofs of all our theorems are provided in Appendix B. In Fig. 3 we plot each method on the distortion-perception plane. We consider the EDSR method [14] to constitute a good approximation for the minimum MSE estimator X̂* since it achieves the lowest MSE among the evaluated methods. We therefore estimate the lower bound (9) as D̂(P) = D_EDSR + [(P_EDSR − P)_+]², where D_EDSR is the MSE of EDSR, and P_EDSR is the estimated Gelbrich distance between EDSR reconstructions and ground-truth images. (A numerical sketch of these quantities appears after the table.)
Researcher Affiliation | Academia | Dror Freirich, Technion - Israel Institute of Technology, drorfrc@gmail.com; Tomer Michaeli, Technion - Israel Institute of Technology, tomer.m@ee.technion.ac.il; Ron Meir, Technion - Israel Institute of Technology, rmeir@ee.technion.ac.il
Pseudocode | No | The paper describes algorithms and derivations mathematically but does not include any pseudocode or algorithm blocks.
Open Source Code | Yes | All code is freely available and provided by the authors.
Open Datasets | Yes | We compute distortion and perception indices for 13 super-resolution algorithms in a 4× magnification task on the BSD100 dataset [16].
Dataset Splits | No | The paper mentions evaluating on the BSD100 dataset but does not specify explicit training, validation, or test dataset splits (e.g., percentages or counts).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | No | The paper describes the overall approach and the construction of optimal estimators via interpolation, but it does not specify concrete hyperparameters or system-level training settings (e.g., learning rate, batch size, epochs).
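
The Research Type row quotes the estimated lower bound D̂(P) = D_EDSR + [(P_EDSR − P)_+]², which combines the MSE of the EDSR reconstructions with the Gelbrich distance between their first- and second-order statistics and those of the ground-truth images. Below is a minimal sketch of how these quantities could be computed, assuming NumPy/SciPy and mean/covariance (Gaussian-style) summaries of the two image distributions; the function names are illustrative and not taken from the authors' code.

```python
import numpy as np
from scipy.linalg import sqrtm

def gelbrich_distance(mu1, cov1, mu2, cov2):
    """Gelbrich distance between two distributions, computed from their
    means and covariances (equals the Wasserstein-2 distance when both
    distributions are Gaussian)."""
    root = sqrtm(cov1)                       # cov1^(1/2)
    cross = np.real(sqrtm(root @ cov2 @ root))  # discard tiny imaginary residues
    d2 = np.sum((mu1 - mu2) ** 2) + np.trace(cov1 + cov2 - 2.0 * cross)
    return float(np.sqrt(max(d2, 0.0)))

def estimated_lower_bound(d_edsr, p_edsr, p):
    """Estimated bound D_hat(P) = D_EDSR + [(P_EDSR - P)_+]^2 on the
    achievable distortion at perception index P (illustrative helper)."""
    return d_edsr + max(p_edsr - p, 0.0) ** 2
```

Note the positive part in the bound: for perception indices P at or above P_EDSR the second term vanishes and the estimated bound reduces to D_EDSR.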