Integer Networks for Data Compression with Latent-Variable Models

Authors: Johannes Ballé, Nick Johnston, David Minnen

ICLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We tested compression and decompression on four different platforms (two CPU platforms and two GPU platforms) and two different datasets, Tecnick (Asuni and Giachetti, 2014) and CLIC (2018). The original model fails to correctly decompress more than half of the images on average when compression and decompression occur on different platforms. The modified model brings the failure rate down to 0% in all cases. (Why integer arithmetic removes these failures is illustrated by the sketch after this table.)
Researcher Affiliation | Industry | Johannes Ballé, Nick Johnston & David Minnen, Google, Mountain View, CA 94043, USA; {jballe,nickj,dminnen}@google.com
Pseudocode | No | The paper describes methods using mathematical equations and prose, but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement or a link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | We tested compression and decompression on four different platforms (two CPU platforms and two GPU platforms) and two different datasets, Tecnick (Asuni and Giachetti, 2014) and CLIC (2018). The rate-distortion performance of the model was assessed on Kodak (1993).
Dataset Splits | No | The paper mentions datasets used for training and evaluation but does not provide specific details on how the datasets were split into training, validation, and test sets (e.g., percentages, sample counts, or explicit standard splits).
Hardware Specification | Yes | CPU 1: Intel Xeon E5-1650; GPU 1: NVIDIA Titan X (Pascal); CPU 2: Intel Xeon E5-2690; GPU 2: NVIDIA Titan X (Maxwell)
Software Dependencies | No | The paper mentions programming languages (e.g., C) and implicitly uses deep learning frameworks, but it does not provide specific names and version numbers for software dependencies or libraries (e.g., PyTorch 1.9, TensorFlow 2.x).
Experiment Setup | No | The paper states, 'We used the same network architectures in terms of number of layers, filters, etc., and the same training parameters as in the original paper.' However, it does not list these training parameters (e.g., learning rate, batch size, number of epochs) in its own text; the reader must consult the earlier publication for them.
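
The cross-platform decompression failures noted in the Research Type row arise from floating-point non-determinism: different CPUs, GPUs, and math libraries may round intermediate results differently, so the prior computed at the decoder can diverge from the encoder's by a few bits and the range decoder desynchronizes. The following is a minimal NumPy sketch, not taken from the paper; the layer shape, the 8-bit fractional scale, and the rounding rule are illustrative assumptions rather than the authors' architecture. It only contrasts a floating-point matrix product, which is reproducible merely up to rounding, with an integer-quantized one, which is bit-exact on any platform that applies the same rounding rule.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal((1, 64)).astype(np.float32)   # activations
    w = rng.standard_normal((64, 64)).astype(np.float32)  # weights

    # Floating-point path: the last bits of the result can depend on the
    # BLAS build, GPU kernels, or fused multiply-add behavior.
    y_float = x @ w

    # Integer path (illustrative): quantize to fixed point, accumulate
    # exactly in integers, then rescale with deterministic floor division.
    scale = 1 << 8                                   # 8 fractional bits (assumed)
    x_q = np.round(x * scale).astype(np.int64)
    w_q = np.round(w * scale).astype(np.int64)
    acc = x_q @ w_q                                  # exact integer accumulation
    y_int = acc // scale                             # identical on every platform

    print(y_float[0, :3])  # may differ in the last bits across platforms
    print(y_int[0, :3])    # reproduced bit-for-bit everywhere

Because y_int is identical on every conforming platform, a prior derived from it stays synchronized between encoder and decoder, which is the property the paper's integer networks provide and which the 0% failure rate in the table reflects.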