Flexible Neural Image Compression via Code Editing

Authors: Chenjian Gao, Tongda Xu, Dailan He, Yan Wang, Hongwei Qin

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Evidence cited: the section heading "4 Experimental Results".
Researcher Affiliation | Collaboration | Chenjian Gao (1), Tongda Xu (1,2), Dailan He (1), Hongwei Qin (1), Yan Wang (1,2); (1) SenseTime Research, (2) Institute for AI Industry Research (AIR), Tsinghua University.
Pseudocode | No | No pseudocode or clearly labeled algorithm block was found in the provided text.
Open Source Code | No | The paper's ethics statement explicitly states: "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No]"
Open Datasets | Yes | Following He et al. [2021], we train the baseline models on a subset of 8,000 images from ImageNet. All experiments involving Code Editing are based on the model trained with λ0 = 0.015. In the ablation study, all results are tested on the Kodak dataset. Fig. 4 shows the multi-distortion trade-off results based on Ballé et al. [2018] on the Kodak dataset. Further, we show qualitative results based on Ballé et al. [2018] on the CLIC2022 dataset [CLIC, 2022] in Fig. 5.
Dataset Splits | No | The paper mentions training on "a subset of 8,000 images from ImageNet" and testing on the Kodak dataset, but it does not specify explicit training/validation/test splits (e.g., percentages or sample counts) in the provided text, nor does it state whether Kodak serves purely as a test set.
Hardware Specification | No | The paper's ethics statement indicates that compute resources were reported ("[Yes]" to "Did you include the total amount of compute and the type of resources used..."), but specific details (e.g., GPU models, CPU types) are not present in the main text provided; they may be in Appendix C, which is not available.
Software Dependencies | No | The paper mentions using the Adam optimizer but does not list software dependencies with version numbers (e.g., Python, PyTorch, or other library versions).
Experiment Setup | Yes | All baseline models are trained using the Adam optimizer for 2,000 epochs. Batch size is set to 16 and the initial learning rate to 1e-4. For each image x to be compressed, we initialize y = f_φ^{λ0}(x) and optimize y for 2,000 iterations using SGA, with a learning rate of 5e-3.
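To make the quoted per-image optimization concrete, below is a minimal PyTorch-style sketch of the code-editing loop: the latent y is initialized from the encoder of the λ0 = 0.015 baseline and then optimized under a rate-distortion objective. The names `encoder`, `decoder`, `entropy_model`, and `lam` are hypothetical placeholders, and a simplified Gumbel-softmax soft-rounding stands in for the full SGA procedure; this is an illustrative sketch under those assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def soft_round(y, tau):
    """SGA-style soft rounding: Gumbel-softmax weights over {floor(y), ceil(y)},
    with logits favoring the nearer integer; tau is annealed toward 0."""
    y_floor, y_ceil = torch.floor(y), torch.ceil(y)
    d = torch.clamp(y - y_floor, 1e-6, 1.0 - 1e-6)    # fractional part in (0, 1)
    logits = torch.stack([-torch.atanh(d), -torch.atanh(1.0 - d)], dim=-1)
    w = F.gumbel_softmax(logits, tau=tau, dim=-1)      # soft one-hot over {floor, ceil}
    return w[..., 0] * y_floor + w[..., 1] * y_ceil


def edit_code(x, encoder, decoder, entropy_model, lam, steps=2000, lr=5e-3):
    """Optimize the latent code y of a single image x under a rate-distortion loss.
    `encoder`, `decoder`, and `entropy_model` are assumed callables of a pretrained
    model; `entropy_model(y_hat)` is assumed to return per-element likelihoods."""
    with torch.no_grad():
        y = encoder(x)                                 # initialize y = f_phi^{lambda_0}(x)
    y = y.clone().requires_grad_(True)
    opt = torch.optim.SGD([y], lr=lr)                  # optimizer choice is an assumption
    for step in range(steps):
        tau = max(0.5 * (1.0 - step / steps), 0.05)    # annealing schedule is an assumption
        y_hat = soft_round(y, tau)
        x_hat = decoder(y_hat)
        rate = -torch.log2(entropy_model(y_hat)).sum()  # estimated bits
        dist = F.mse_loss(x_hat, x)                     # distortion (MSE)
        loss = rate + lam * dist                        # R + lambda * D trade-off
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.round(y.detach())                     # hard rounding for entropy coding
```

Varying `lam` (and the distortion term) at this editing stage, rather than retraining the model, is presumably how the paper obtains the flexible rate and multi-distortion trade-offs reported on Kodak and CLIC2022.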