Rethinking Lossy Compression: The Rate-Distortion-Perception Tradeoff
Authors: Yochai Blau, Tomer Michaeli
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We now turn to demonstrate the visual implications of the rate-distortion-perception tradeoff in lossy image compression on a toy MNIST example. |
| Researcher Affiliation | Academia | Technion – Israel Institute of Technology, Haifa, Israel. Correspondence to: Yochai Blau <yochai@campus.technion.ac.il>, Tomer Michaeli <tomer.m@ee.technion.ac.il>. |
| Pseudocode | No | The paper describes the procedures and algorithms in textual form, but it does not include any clearly labeled pseudocode blocks or algorithm figures. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We train 98 encoder-decoder pairs on the MNIST handwritten digit dataset (LeCun et al., 1998) |
| Dataset Splits | No | The paper mentions using the MNIST handwritten digit dataset and refers to 'test samples' for evaluation, but it does not specify the train/validation/test split percentages, absolute sample counts, or splitting methodology needed for reproduction. |
| Hardware Specification | No | The paper describes the experimental setup and training procedures but does not provide specific details about the hardware used, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions various deep learning concepts and methods (e.g., 'deep neural nets', 'Wasserstein GAN'), citing relevant research papers. However, it does not specify version numbers for any software dependencies, such as programming languages (e.g., Python version) or libraries (e.g., TensorFlow, PyTorch versions). |
| Experiment Setup | No | The paper states, 'A list of all combinations of (dim, L, λ) used, along with all other training details can be found in the Supplementary Material.', indicating that the specific hyperparameter values and full training configurations are not present in the main text. |
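For context on the (dim, L, λ) hyperparameters cited in the last row: the paper trains encoder-decoder pairs whose loss combines a distortion term with a λ-weighted adversarial (Wasserstein GAN) term, while the rate is set by the latent dimension `dim` and the number of quantization levels `L`. Since no official code was released, the following is a minimal PyTorch sketch of such an objective; all layer sizes, the straight-through quantizer, and the variable names are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch, not the authors' code (the paper releases none): a minimal
# PyTorch encoder-decoder for MNIST whose loss pairs a squared-error
# distortion term with a lambda-weighted Wasserstein GAN generator term.
import torch
import torch.nn as nn

dim, L, lam = 4, 2, 0.1  # illustrative (dim, L, lambda); the real grid is in the supplementary

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, x):
        z = torch.sigmoid(self.net(x))          # latent in [0, 1]
        q = torch.round(z * (L - 1)) / (L - 1)  # quantize to L levels
        return z + (q - z).detach()             # straight-through gradient

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 784), nn.Sigmoid())

    def forward(self, z):
        return self.net(z).view(-1, 1, 28, 28)

# WGAN critic; its own training loop (weight clipping or gradient penalty) is omitted.
critic = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 1))

def generator_loss(x, x_hat):
    distortion = ((x - x_hat) ** 2).mean()  # E[Delta(X, X_hat)]
    perception = -critic(x_hat).mean()      # adversarial proxy for d(p_X, p_X_hat)
    return distortion + lam * perception    # lambda trades distortion for perceptual quality
```

In this sketch the rate is fixed architecturally at dim · log2(L) bits per image, so sweeping the (dim, L, λ) grid traces out the rate-distortion-perception surface that the paper evaluates.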