Towards Efficient Image Compression Without Autoregressive Models
Authors: Muhammad Salman Ali, Yeongwoong Kim, Maryam Qamar, Sung-Chang Lim, Donghyun Kim, Chaoning Zhang, Sung-Ho Bae, Hui Yong Kim
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We train all our models on the Vimeo-90k dataset Xue et al. [2019]... We tested our model on a commonly used Kodak lossless images dataset Kodak [1993]... Figure 1: Performance-complexity tradeoff using various entropy models... Figure 7: RD rate comparison... Table 1: Average encoding and decoding time... Ablation Studies: Comprehensive ablation studies regarding various mask types, mask sizes, and α values are presented in the supplementary material. |
| Researcher Affiliation | Academia | Muhammad Salman Ali 1, Yeongwoong Kim1, Maryam Qamar 1, Sung-Chang Lim2, Donghyun Kim2, Chaoning Zhang1, Sung-Ho Bae 1, Hui Yong Kim 1 1 Kyung Hee University, Republic of Korea 2 Electronics and Telecommunications Research Institute (ETRI), Republic of Korea |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states 'We perform all our experiments on the PyTorch framework Paszke et al. [2017] and use the CompressAI library Bégaint et al. [2020]', but it provides no explicit statement or link releasing the authors' own source code for the described method. |
| Open Datasets | Yes | We train all our models on the Vimeo-90k dataset Xue et al. [2019]... We tested our model on a commonly used Kodak lossless images dataset Kodak [1993]... (A sketch of a Kodak evaluation loop is given after the table.) |
| Dataset Splits | No | The paper states it trains on the Vimeo-90k dataset and tests on the Kodak dataset, but it does not specify a separate validation split for either dataset. |
| Hardware Specification | Yes | Minnen's and Cheng's models were trained using an NVIDIA 2080Ti, whereas the Swin-T model was trained on an NVIDIA 3070Ti due to the transformer's high memory requirement. |
| Software Dependencies | No | The paper mentions the 'PyTorch framework Paszke et al. [2017]' and the 'CompressAI library Bégaint et al. [2020]' but does not provide version numbers for these software components. (A minimal usage sketch of these dependencies follows the table.) |
| Experiment Setup | Yes | The models were optimized using the Adam optimizer Kingma and Ba [2015] with a batch size of 16 and trained for 1.5 million iterations, with a learning rate of 1×10⁻⁴ for the first million iterations, then halved every 50,000 iterations until 1.25 million iterations. ... The rate-distortion tradeoff is guided by λ, whose value is contained in the set [0.0009, 0.0018, 0.0035, 0.0067, 0.0130, 0.0250]. (A sketch of this training recipe follows the table.) |
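
For context on the dependency finding above: the CompressAI library exposes pretrained versions of the Minnen et al. [2018] and Cheng et al. [2020] baselines the paper compares against. The sketch below loads one and times encoding/decoding, mirroring the paper's Table 1 measurements; the model choice, quality level, and input size here are our assumptions, not settings taken from the paper.

```python
# Minimal sketch (assumed setup, not the authors' released code): load a
# CompressAI baseline and time encode/decode, as in the paper's
# encoding/decoding-time comparison.
import time

import torch
from compressai.zoo import mbt2018  # Minnen et al. [2018] baseline in CompressAI

net = mbt2018(quality=3, pretrained=True).eval()  # quality level is an assumption

x = torch.rand(1, 3, 512, 768)  # stand-in for a 512x768 Kodak image

with torch.no_grad():
    t0 = time.perf_counter()
    enc = net.compress(x)  # entropy-coded bitstrings plus latent shape
    t_enc = time.perf_counter() - t0

    t0 = time.perf_counter()
    dec = net.decompress(enc["strings"], enc["shape"])  # reconstruction in dec["x_hat"]
    t_dec = time.perf_counter() - t0

print(f"encode: {t_enc:.3f}s  decode: {t_dec:.3f}s")
```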
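The experiment-setup row quotes a concrete recipe (Adam, batch size 16, the 1×10⁻⁴ schedule, and six λ values). Below is a minimal sketch of that schedule and the usual λ-weighted rate-distortion objective; the 255² distortion scaling follows the CompressAI convention consistent with these λ values, and the exact halving boundaries are our interpretation of the quoted schedule.

```python
# Minimal sketch of the quoted training recipe; model/dataloader wiring is
# omitted, and the loss scaling follows CompressAI conventions (an assumption).
import torch
import torch.nn.functional as F

LAMBDAS = [0.0009, 0.0018, 0.0035, 0.0067, 0.0130, 0.0250]  # from the paper


def lr_at(step: int, base_lr: float = 1e-4) -> float:
    """LR schedule: 1e-4 for the first 1M steps, then halved every 50k until 1.25M."""
    if step < 1_000_000:
        return base_lr
    halvings = min((step - 1_000_000) // 50_000 + 1, 5)  # 5 halvings by 1.25M steps
    return base_lr * 0.5**halvings


def rd_loss(out: dict, x: torch.Tensor, lam: float) -> torch.Tensor:
    """Lambda-weighted R-D loss: lam * 255^2 * MSE + bits-per-pixel."""
    n, _, h, w = x.shape
    num_pixels = n * h * w
    bpp = sum(lh.log2().sum() / -num_pixels for lh in out["likelihoods"].values())
    return lam * 255**2 * F.mse_loss(out["x_hat"], x) + bpp


# Usage (placeholders): opt = torch.optim.Adam(model.parameters(), lr=1e-4);
# each step, set g["lr"] = lr_at(step) for g in opt.param_groups, then
# backprop rd_loss(model(x), x, lam) with batches of 16 for 1.5M iterations.
```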
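Since the evaluation data (Kodak) is public, reproducing per-image bpp/PSNR numbers mainly requires a measurement loop. A sketch under an assumed local dataset path and a CompressAI-style model interface:

```python
# Minimal sketch of a Kodak bpp/PSNR measurement loop; the dataset path and
# model are placeholders, not artifacts released with the paper.
import math
from pathlib import Path

import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor


def eval_image(net, path: Path):
    x = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0)  # 1x3xHxW in [0,1]
    _, _, h, w = x.shape
    with torch.no_grad():
        enc = net.compress(x)
        x_hat = net.decompress(enc["strings"], enc["shape"])["x_hat"].clamp(0, 1)
    num_bits = 8 * sum(len(s[0]) for s in enc["strings"])  # bytes -> bits
    mse = torch.mean((x - x_hat) ** 2).item()
    return num_bits / (h * w), 10 * math.log10(1.0 / mse)  # (bpp, PSNR in dB)


# for img in sorted(Path("kodak").glob("*.png")):  # assumed local Kodak copy
#     print(img.name, eval_image(net, img))
```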