Bit Allocation using Optimization
Authors: Tongda Xu, Han Gao, Chenjian Gao, Yuanyuan Wang, Dailan He, Jinyong Pi, Jixiang Luo, Ziyu Zhu, Mao Ye, Hongwei Qin, Yan Wang, Jingjing Liu, Ya-Qin Zhang
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that current state-of-the-art bit allocation algorithms still have a room of 0.5 dB PSNR to improve compared with ours. |
| Researcher Affiliation | Collaboration | 1 Institute for AI Industry Research (AIR), Tsinghua University; 2 SenseTime Research; 3 University of Electronic Science and Technology of China; 4 Beihang University; 5 Department of Computer Science and Technology, Tsinghua University; 6 Tsinghua University; 7 School of Vehicle and Mobility, Tsinghua University. |
| Pseudocode | Yes | Algorithm 1 Original SAVI on DAG |
| Open Source Code | Yes | Code is available at https://github.com/tongdaxu/Bit-Allocation-Using-Optimization. |
| Open Datasets | Yes | dataset we use is MNIST (LeCun et al., 1998). ... Following baselines, we adopt HEVC Common Testing Condition (CTC) (Bossen et al., 2013) and UVG dataset (Mercat et al., 2020). |
| Dataset Splits | No | The paper does not explicitly describe a validation dataset split used for hyperparameter tuning or early stopping during the training of the main NVC models. While it mentions optimization parameters for the SAVI process, it doesn't detail a validation phase for the dataset itself. |
| Hardware Specification | Yes | Encode time is per-frame and measured with AMD EPYC 7742 CPU and Nvidia A100 GPU. |
| Software Dependencies | Yes | For H.266, we test VTM 13.2 with the low-delay P preset. |
| Experiment Setup | Yes | We adopt Adam (Kingma & Ba, 2014) optimizer with lr = 1e-3 to optimize y1:N for K = 2000 iterations. ... We adopt Adam (Kingma & Ba, 2014) optimizer with β1 = 0.9, β2 = 0.999, ϵ = 1e-4 and batch-size 20. ... To re-parameterize the quantization, we adopt Stochastic Gumbel Annealing with the hyper-parameters as Yang et al. (2020b). (A sketch of this optimization loop follows the table.) |
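
The Experiment Setup row describes a semi-amortized inference loop: the frame latents y1:N are initialized by the amortized encoder and then directly optimized with Adam (lr = 1e-3, K = 2000 steps) against a rate-distortion objective, with quantization relaxed via Stochastic Gumbel Annealing. The sketch below illustrates that loop in PyTorch under stated assumptions; it is not the authors' implementation (see the linked repository for that). `codec`, `codec.encode`, `codec.rate_distortion`, `frames`, and `lmbda` are hypothetical placeholders, and SGA-style soft quantization is assumed to happen inside `rate_distortion`.

```python
# Minimal sketch of the per-video latent optimization described above.
# Assumptions (not from the paper): `codec` is a pretrained NVC model
# exposing `encode(frame)` and `rate_distortion(latents, frames)`; `lmbda`
# is the rate-distortion trade-off weight.
import torch

def optimize_latents(codec, frames, lmbda, K=2000, lr=1e-3):
    # Initialize latents with the amortized encoder, then treat them as
    # free parameters (the semi-amortized variational inference step).
    latents = [codec.encode(f).detach().requires_grad_(True) for f in frames]
    opt = torch.optim.Adam(latents, lr=lr)
    for _ in range(K):
        opt.zero_grad()
        rate, dist = codec.rate_distortion(latents, frames)
        loss = rate + lmbda * dist  # standard R-D Lagrangian: R + lambda * D
        loss.backward()
        opt.step()
    return latents
```

Optimizing the latents jointly across all N frames (rather than frame by frame) is what lets the bit allocation emerge from the optimization itself, which is the core idea the paper's Algorithm 1 (SAVI on a DAG) formalizes.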