Multi-Sample Training for Neural Image Compression

Authors: Tongda Xu, Yan Wang, Dailan He, Chenjian Gao, Han Gao, Kunzan Liu, Hongwei Qin

NeurIPS 2022

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Experimental results demonstrate that it improves sota NIC methods. Our MS-NIC is plug-and-play, and can be easily extended to other neural compression tasks. ... We demonstrate the efficiency of MS-NIC through experimental results on sota NIC methods." |
| Researcher Affiliation | Collaboration | Tongda Xu (1,2), Yan Wang (1,2), Dailan He (1), Chenjian Gao (1,3), Han Gao (1,4), Kunzan Liu (1,5), Hongwei Qin (1). (1) SenseTime Research; (2) Institute for AI Industry Research (AIR), Tsinghua University; (3) Beihang University; (4) University of Electronic Science and Technology of China; (5) Department of Electronic Engineering, Tsinghua University |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | "3. (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No]" |
| Open Datasets | Yes | "Following He et al. [2022], we train all the models on the largest 8000 images of ImageNet [Deng et al., 2009], followed by a downsampling according to Ballé et al. [2018]." (A preprocessing sketch follows the table.) |
| Dataset Splits | Yes | "Following He et al. [2022], we train all the models on the largest 8000 images of ImageNet [Deng et al., 2009], followed by a downsampling according to Ballé et al. [2018]." |
| Hardware Specification | No | The checklist answers "3. (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes]", but the paper does not name the exact hardware models (e.g., specific GPU or CPU types); it only confirms that this information was generally included. |
| Software Dependencies | No | The paper does not explicitly mention software dependencies with specific version numbers. |
| Experiment Setup | Yes | "For the experiments based on Ballé et al. [2018] (including Tab. 1, Tab. 2), we follow the settings of the original paper except for the selection of λ: we set λ ∈ {0.0016, 0.0032, 0.0075, 0.015, 0.03, 0.045, 0.08} as suggested in Cheng et al. [2020]. For the experiments based on Cheng et al. [2020], we follow the settings of the original paper. More detailed experimental settings can be found in Appendix A.4." (A rate-distortion loss sketch follows the table.) |
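
For readers who want to mirror the dataset preparation quoted in the Open Datasets row, below is a minimal Python sketch of the usual pipeline: rank a local ImageNet copy by pixel count, keep the largest 8000 images (per He et al. [2022]), then downsample each image by a random factor before cropping, in the spirit of Ballé et al. [2018]. The directory path, crop size, and downsampling range here are illustrative assumptions, not values stated in the paper; consult its Appendix A.4 for the exact settings.

```python
import os
import random
from PIL import Image  # Pillow >= 9.1 for Image.Resampling

# Hypothetical path and parameters -- not stated in the paper.
IMAGENET_DIR = "/data/imagenet/train"
NUM_IMAGES = 8000   # "largest 8000 images", per He et al. [2022]
PATCH_SIZE = 256    # assumed training crop size

def largest_images(root: str, k: int) -> list[str]:
    """Rank every image under `root` by pixel count and keep the k largest."""
    paths = [
        os.path.join(dirpath, name)
        for dirpath, _, names in os.walk(root)
        for name in names
        if name.lower().endswith((".jpg", ".jpeg", ".png"))
    ]

    def pixel_count(path: str) -> int:
        with Image.open(path) as im:  # reads only the header, so this is cheap
            return im.width * im.height

    return sorted(paths, key=pixel_count, reverse=True)[:k]

def preprocess(path: str, patch: int = PATCH_SIZE) -> Image.Image:
    """Downsample by a random factor (to suppress compression artifacts,
    following Ballé et al. [2018]), then take a random crop."""
    with Image.open(path) as im:
        im = im.convert("RGB")
        factor = random.uniform(0.6, 1.0)  # assumed range, not from the paper
        w = max(patch, int(im.width * factor))
        h = max(patch, int(im.height * factor))
        im = im.resize((w, h), Image.Resampling.BICUBIC)
        left = random.randint(0, im.width - patch)
        top = random.randint(0, im.height - patch)
        return im.crop((left, top, left + patch, top + patch))
```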
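
The λ values in the Experiment Setup row weight distortion against rate in the standard neural-image-compression objective L = λ·D + R. Below is a minimal PyTorch sketch of that loss. The 255² scaling of the MSE term and the likelihood-based bits-per-pixel estimate follow the common CompressAI-style convention and are assumptions here; the quoted text only lists the λ values.

```python
import math
import torch
import torch.nn.functional as F

# λ values quoted in the Experiment Setup row (Cheng et al. [2020]).
LAMBDAS = [0.0016, 0.0032, 0.0075, 0.015, 0.03, 0.045, 0.08]

def rate_distortion_loss(x, x_hat, likelihoods, lmbda):
    """L = λ · 255² · MSE(x, x_hat) + bpp.

    `likelihoods` is an iterable of per-element likelihood tensors
    (e.g. for the latent y and hyper-latent z in a hyperprior model).
    The 255² MSE scaling is the CompressAI-style convention and an
    assumption here, not a detail given in the quoted text."""
    n, _, h, w = x.shape
    num_pixels = n * h * w
    # Bits per pixel: negative log2-likelihood, averaged over pixels.
    bpp = sum(t.log().sum() for t in likelihoods) / (-math.log(2) * num_pixels)
    mse = F.mse_loss(x_hat, x)
    return lmbda * 255 ** 2 * mse + bpp
```

In the usual setup, one model is trained per λ in the list above, and the resulting models trace out the rate-distortion curve.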