Improving Inference for Neural Image Compression
Authors: Yibo Yang, Robert Bamler, Stephan Mandt
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our experiments, which include extensive baseline comparisons and ablation studies, we achieve new state-of-the-art performance on lossy image compression using an established VAE architecture, by changing only the inference method. |
| Researcher Affiliation | Academia | Yibo Yang, Robert Bamler, Stephan Mandt Department of Computer Science University of California, Irvine {yibo.yang, rbamler, mandt}@uci.edu |
| Pseudocode | Yes | Algorithm 1: Proposed lossy bits-back coding method (Section 3.3 and Figure 1e-f). |
| Open Source Code | No | The paper does not provide any statement or link for the open-source code of the described methodology. |
| Open Datasets | Yes | We improve its performance drastically, achieving an average of over 15% BD rate savings on Kodak and 20% on Tecnick [Asuni and Giachetti, 2014]. References: Eastman Kodak. Kodak lossless true color image suite (Photo CD PCD0992). URL http://r0k.us/graphics/kodak; N. Asuni and A. Giachetti. TESTIMAGES: A large-scale archive for testing visual devices and basic image processing algorithms (SAMPLING 1200 RGB set). In STAG: Smart Tools and Apps for Graphics, 2014. URL https://sourceforge.net/projects/testimages/files/OLD/OLD_SAMPLING/testimages.zip. |
| Dataset Splits | No | The paper uses the Kodak and Tecnick datasets for experiments but does not provide specific training, validation, or test dataset splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (GPU/CPU models, memory, etc.) used for running its experiments. |
| Software Dependencies | Yes | In all results, we used Adam [Kingma and Ba, 2015] for optimization, and annealed the temperature of SGA by an exponential decay schedule, and found good convergence without per-model hyperparameter tuning. |
| Experiment Setup | Yes | In all results, we used Adam [Kingma and Ba, 2015] for optimization, and annealed the temperature of SGA by an exponential decay schedule, and found good convergence without per-model hyperparameter tuning. We provide details in the Supplementary Material. |
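The experiment setup quoted above mentions annealing the temperature of SGA (Stochastic Gumbel Annealing) with an exponential decay schedule. A minimal sketch of what such a schedule and a temperature-controlled soft rounding step could look like is below; the decay constants, the logit parameterization, and the function names are illustrative assumptions, not the paper's exact values (those are in its Supplementary Material).

```python
import numpy as np

def temperature(step, tau0=0.5, decay=0.999, tau_min=1e-4):
    # Exponential decay schedule with a floor. tau0, decay, and tau_min
    # are illustrative hyperparameters, not taken from the paper.
    return max(tau0 * decay ** step, tau_min)

def sga_round(y, tau, rng):
    # Relax round(y) as a two-category Gumbel-softmax over {floor(y), ceil(y)}.
    # The logit parameterization here is a simple stand-in favoring the
    # nearer integer; SGA's actual parameterization may differ.
    lo, hi = np.floor(y), np.ceil(y)
    logits = np.array([-(y - lo), -(hi - y)])
    gumbel = -np.log(-np.log(rng.uniform(size=2)))  # standard Gumbel noise
    z = (logits + gumbel) / tau
    w = np.exp(z - z.max())
    w /= w.sum()
    # Soft stochastic rounding: a convex combination of the two integers
    # that hardens to a single integer as tau -> 0.
    return w[0] * lo + w[1] * hi
```

As the temperature decays toward zero over optimization steps, the soft rounding output concentrates on an integer, which is what lets gradient-based optimizers like Adam tune the latents before hard quantization.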