Variational image compression with a scale hyperprior
Authors: Johannes Ballé, David Minnen, Saurabh Singh, Sung Jin Hwang, Nick Johnston
ICLR 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 'To compare the compression performance of our proposed models, we conducted a number of experiments using the TensorFlow framework. We set up the transforms g_a, g_s, h_a, and h_s as alternating compositions of linear and nonlinear functions, as is common in artificial neural networks (Figure 4).' and 'We evaluate the compression performance of all models on the publicly available Kodak dataset (Eastman Kodak, 1993).' (See the transform sketch after the table.) |
| Researcher Affiliation | Industry | Johannes Ballé (jballe@google.com), David Minnen (dminnen@google.com), Saurabh Singh (saurabhsingh@google.com), Sung Jin Hwang (sjhwang@google.com), Nick Johnston (nickj@google.com); Google, Mountain View, CA 94043, USA |
| Pseudocode | No | The paper includes network architecture diagrams (Figure 4) but does not provide any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide a direct statement about releasing their own source code, nor does it include a link to a repository for their specific methodology. It only references the TensorFlow implementation of a third-party component they used. |
| Open Datasets | No | The models were trained on 'approximately 1 million images scraped from the world wide web' that were then processed (downsampled, cropped). While the evaluation uses the public Kodak dataset, no concrete access information (link, DOI, citation with author/year) is provided for the specific training dataset used. |
| Dataset Splits | No | The paper does not provide specific dataset split information (e.g., percentages, sample counts, or citations to predefined splits) for training, validation, or test sets on the custom dataset used for training, nor for the Kodak dataset used for evaluation. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, only mentioning that experiments were conducted 'using the TensorFlow framework'. |
| Software Dependencies | No | The paper mentions using the 'TensorFlow framework' and links to TensorFlow documentation for GDN/IGDN, but it does not state a version number for TensorFlow or for any other software dependency. |
| Experiment Setup | Yes | 'Minibatches of 8 of these crops at a time were used to perform stochastic gradient descent using the Adam algorithm (Kingma and Ba, 2015) with a learning rate of 10^-4.' and 'N = 128 and M = 192 for the 5 lower values, and N = 192 and M = 320 for the 3 higher values.' (See the training-setup sketch after the table.) |
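
For reference, here is a minimal sketch of the 'alternating compositions of linear and nonlinear functions' quoted in the Research Type row, written against TensorFlow 2 / Keras. The function name `analysis_transform`, the four-layer/5x5-kernel/stride-2 structure, and the use of ReLU in place of the paper's GDN nonlinearity (available separately as `tensorflow_compression.GDN`) are assumptions for illustration; the filter counts follow the quoted N = 128, M = 192 configuration. This is a sketch, not the authors' code.

```python
import tensorflow as tf

def analysis_transform(num_filters=128, num_latents=192):
    """Sketch of an analysis transform g_a: strided 5x5 convolutions
    (linear) alternating with pointwise nonlinearities, each halving
    the spatial resolution. ReLU stands in for the paper's GDN layers;
    layer count and kernel size are assumptions for illustration."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(num_filters, 5, strides=2, padding="same",
                               activation="relu"),
        tf.keras.layers.Conv2D(num_filters, 5, strides=2, padding="same",
                               activation="relu"),
        tf.keras.layers.Conv2D(num_filters, 5, strides=2, padding="same",
                               activation="relu"),
        # Final linear layer producing the latent representation y.
        tf.keras.layers.Conv2D(num_latents, 5, strides=2, padding="same",
                               activation=None),
    ])
```

The synthesis transform g_s would mirror this structure with transposed convolutions and the inverse nonlinearity (IGDN in the paper).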
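Similarly, a hypothetical training-step skeleton reflecting the quoted setup (minibatches of 8 crops, Adam, learning rate 10^-4). The `rate_distortion_loss` stub and its `lmbda` weight are illustrative placeholders, not the authors' objective, which also includes a rate term from the hyperprior entropy model.

```python
import tensorflow as tf

BATCH_SIZE = 8          # minibatches of 8 crops, per the paper
LEARNING_RATE = 1e-4    # Adam learning rate, per the paper

optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)

def rate_distortion_loss(model, batch, lmbda=0.01):
    """Placeholder objective: distortion only (mean squared error between
    input and reconstruction). The paper's loss adds a rate term from the
    hyperprior entropy model; lmbda here is an illustrative weight."""
    reconstruction = model(batch)
    return lmbda * tf.reduce_mean(tf.square(batch - reconstruction))

@tf.function
def train_step(model, batch):
    # One stochastic-gradient step with Adam on a minibatch of crops.
    with tf.GradientTape() as tape:
        loss = rate_distortion_loss(model, batch)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```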