Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Joint Autoregressive and Hierarchical Priors for Learned Image Compression
Authors: David Minnen, Johannes Ballé, George D. Toderici
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our generalized models by calculating the rate distortion (RD) performance averaged over the publicly available Kodak image set [21]. Figure 2 shows RD curves using peak signal-to-noise ratio (PSNR) as the image quality metric. [...] The combined model yields state-of-the-art rate distortion performance and generates smaller files than existing methods: 15.8% rate reductions over the baseline hierarchical model and 59.8%, 35%, and 8.4% savings over JPEG, JPEG2000, and BPG, respectively. |
| Researcher Affiliation | Industry | David Minnen, Johannes Ballé, George Toderici Google Research EMAIL |
| Pseudocode | No | The paper does not contain any pseudocode blocks or sections explicitly labeled 'Algorithm'. |
| Open Source Code | No | The paper does not provide an explicit statement about making its source code publicly available or include any links to a code repository. |
| Open Datasets | Yes | We evaluate our generalized models by calculating the rate distortion (RD) performance averaged over the publicly available Kodak image set [21]. |
| Dataset Splits | No | The paper mentions training and evaluating on datasets, but it does not specify the explicit percentages or counts for training, validation, and test splits that would be needed for reproduction. |
| Hardware Specification | No | The paper does not specify any details about the hardware (e.g., GPU models, CPU specifications, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not list any specific software dependencies with version numbers (e.g., Python, TensorFlow, PyTorch versions, or specific libraries). |
| Experiment Setup | Yes | Details about the individual network layers in each component of our models are outlined in Table 1. [...] Optimized with λ = 0.025 (bpp 0.61 on Kodak), the baseline outperforms the other variants we tested (see text for details). |