Coarse-to-Fine Hyper-Prior Modeling for Learned Image Compression
Authors: Yueyu Hu, Wenhan Yang, Jiaying Liu (pp. 11013-11020)
AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show the effectiveness of the proposed network in efficiently reducing redundancies in images and improving rate-distortion performance, especially for high-resolution images. We conduct experiments to compare the proposed method with existing learning-based and hybrid image compression methods; the experimental results cover rate-distortion performance and parallel acceleration analysis. We evaluate the Rate-Distortion (R-D) performance on the publicly available Kodak image set (Kodak 1993) and the Tecnick SAMPLING image set (Asuni and Giachetti 2014); a minimal R-D measurement sketch follows the table. |
| Researcher Affiliation | Academia | Yueyu Hu, Wenhan Yang, Jiaying Liu Wangxuan Institute of Computer Technology, Peking University, Beijing, China {huyy, yangwenhan, liujiaying}@pku.edu.cn |
| Pseudocode | No | The paper describes network architectures and processes in text and diagrams (Figure 1, Figure 2, Figure 3, Table 1) but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | Our project is publicly available at https://huzi96.github.io/coarse-to-fine-compression.html. This URL leads to a project demonstration page, not directly to a specific code repository. |
| Open Datasets | Yes | We train the network using the DIV2K dataset (Agustsson and Timofte 2017), which contains high-resolution images that have not been lossily compressed. We evaluate the Rate-Distortion (R-D) performance on the publicly available Kodak image set (Kodak 1993) and Tecnick SAMPLING image set (Asuni and Giachetti 2014). |
| Dataset Splits | No | The paper mentions training on the DIV2K dataset and evaluating on Kodak and Tecnick datasets but does not explicitly provide details about training/validation/test splits, percentages, or counts for any of these datasets. |
| Hardware Specification | No | The paper mentions 'current large-scale parallel computing devices, e.g. GPU' and 'CPU' for speedup analysis but does not specify any particular hardware models (e.g., specific GPU or CPU names) used for experiments. |
| Software Dependencies | No | The paper mentions using 'Adam (Kingma and Ba 2015) as the optimizer' but does not specify version numbers for any software dependencies or frameworks (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We train the network using Adam (Kingma and Ba 2015) as the optimizer, with the initial learning rate set to 10^-4. In the final stage of training, we reduce the learning rate by 0.5 after every 100,000 iterations. A minimal training-schedule sketch appears directly below. |
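
The paper states the optimizer (Adam), the initial learning rate (10^-4), and the halving schedule (every 100,000 iterations in the final stage), but does not publish training code or name a framework. The following is a minimal sketch of that schedule, assuming PyTorch; the `model`, batch source, iteration count, and loss are placeholders, not the authors' actual coarse-to-fine hyper-prior network or rate-distortion objective.

```python
import torch
from torch import nn, optim

# Placeholder network; the real coarse-to-fine architecture is described
# in the paper's Figures 1-3 and Table 1, not reproduced here.
model = nn.Sequential(nn.Conv2d(3, 192, kernel_size=5, stride=2, padding=2))

# Adam with an initial learning rate of 10^-4, as stated in the paper.
optimizer = optim.Adam(model.parameters(), lr=1e-4)

# Final-stage schedule: halve the learning rate every 100,000 iterations.
# StepLR reproduces this when stepped once per training iteration.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=100_000, gamma=0.5)

for step in range(300_000):            # total iteration count is an assumption
    x = torch.rand(1, 3, 256, 256)     # stand-in batch (the paper trains on DIV2K)
    loss = model(x).abs().mean()       # stand-in for the rate-distortion loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```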
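
The R-D evaluation on Kodak and Tecnick is reported in the paper but no measurement code is given. Below is a generic sketch, assuming the usual definitions of rate (bits per pixel, i.e. coded bits divided by pixel count) and a common distortion metric (PSNR for 8-bit images); the synthetic image pair and the 30,000-byte bitstream size are illustrative values, not results from the paper.

```python
import numpy as np

def bits_per_pixel(bitstream_bytes: int, height: int, width: int) -> float:
    """Rate in bits per pixel: total coded bits divided by pixel count."""
    return bitstream_bytes * 8 / (height * width)

def psnr(original: np.ndarray, reconstruction: np.ndarray, peak: float = 255.0) -> float:
    """Distortion as PSNR in dB for 8-bit images."""
    mse = np.mean((original.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Example with a synthetic 512x768 image pair (Kodak images are 768x512).
h, w = 512, 768
orig = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
noise = np.random.randint(-3, 4, orig.shape)
recon = np.clip(orig.astype(np.int16) + noise, 0, 255).astype(np.uint8)

print(bits_per_pixel(bitstream_bytes=30_000, height=h, width=w))  # rate in bpp
print(psnr(orig, recon))                                          # distortion in dB
```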