Soft then Hard: Rethinking the Quantization in Neural Image Compression
Authors: Zongyu Guo, Zhizheng Zhang, Runsen Feng, Zhibo Chen
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate that our proposed methods are easy to adopt, stable to train, and highly effective especially on complex compression models. |
| Researcher Affiliation | Academia | Zongyu Guo, Zhizheng Zhang, Runsen Feng, Zhibo Chen (University of Science and Technology of China). |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | We train the compression models on the full ImageNet training set (Deng et al., 2009) and test the rate-distortion performance on the Kodak dataset (Kodak, 1993), a widely used dataset for evaluating the performance of image compression models. |
| Dataset Splits | No | The paper mentions training on ImageNet and testing on Kodak, but does not explicitly provide details about a validation dataset split (e.g., percentages, sample counts, or a specific validation set name). |
| Hardware Specification | Yes | All experiments are run on NVIDIA A100 GPUs. |
| Software Dependencies | No | The paper mentions using the Adam optimizer but does not provide specific version numbers for any software dependencies like programming languages, frameworks, or libraries. |
| Experiment Setup | Yes | We strictly follow the settings in (Cheng et al., 2020), including their hyper-parameters (e.g., learning rate and batch size) and network architectures. We train our models with the Adam optimizer for 2M iterations. The learning rate is set to 10^-4 initially and decays to 10^-5 at 1.8M iterations. |
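
The training schedule quoted in the Experiment Setup row above can be sketched as follows. This is a minimal PyTorch sketch under our own assumptions, not the authors' released code: the model, batch size, and loss shown here are placeholders, while the optimizer, iteration count, and learning-rate decay point follow the paper's stated settings.

```python
import torch

# Placeholder stand-in for the compression model (Cheng et al., 2020 architecture in the paper).
model = torch.nn.Conv2d(3, 3, 3, padding=1)

# Adam optimizer with initial learning rate 10^-4, as stated in the paper.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Decay the learning rate by 10x (to 10^-5) at 1.8M iterations.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[1_800_000], gamma=0.1
)

# Train for 2M iterations; the batch and loss below are placeholders, not the
# paper's actual data pipeline or rate-distortion objective.
for step in range(2_000_000):
    x = torch.rand(8, 3, 256, 256)          # placeholder batch of image crops
    loss = (model(x) - x).pow(2).mean()     # placeholder reconstruction loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                         # advance the per-iteration LR schedule
```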