Variational Bayesian Quantization
Authors: Yibo Yang, Robert Bamler, Stephan Mandt
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results demonstrate the importance of taking into account posterior uncertainties, and show that image compression with the proposed algorithm outperforms JPEG over a wide range of bit rates using only a single standard VAE. Further experiments on Bayesian neural word embeddings demonstrate the versatility of the proposed method. |
| Researcher Affiliation | Academia | Yibo Yang*, Robert Bamler*, Stephan Mandt (*Equal contribution). Department of Computer Science, University of California, Irvine. Correspondence to: Yibo Yang <yibo.yang@uci.edu>, Robert Bamler <rbamler@uci.edu>. |
| Pseudocode | Yes | Algorithm 1 Rate-Distortion Optimization for Dimension i |
| Open Source Code | Yes | Our code is available at https://github.com/mandt-lab/vbq. |
| Open Datasets | Yes | We trained the model on books published between 1980 and 2008 from the Google Books corpus (Michel et al., 2011)... We evaluate performance on the semantic and syntactic reasoning task proposed in (Mikolov et al., 2013a)... We trained a VAE on the MNIST dataset and the Frey Faces dataset... We trained the model on the same subset of the ImageNet dataset as used in (Ballé et al., 2017). We evaluated performance on the standard Kodak dataset, a separate set of 24 uncompressed color images. The Kodak dataset URL is https://www.cns.nyu.edu/~lcv/iclr2017/. |
| Dataset Splits | No | The paper mentions using training data for certain calculations but does not explicitly provide details about train/validation/test dataset splits (e.g., percentages, sample counts, or predefined splits). |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU or CPU models, or cloud computing specifications used for experiments. |
| Software Dependencies | No | The paper mentions 'libjpeg' and 'Python Pillow library' but does not specify their version numbers or other software dependencies with version information. |
| Experiment Setup | Yes | The generative network parameterizes a factorized categorical or Gaussian likelihood model... and the variance σ² is fixed as a hyper-parameter... σ² was tuned to 0.001 to ensure the VAE achieved overall good R-D trade-off. |
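The pseudocode row above refers to the paper's per-dimension rate-distortion optimization. A minimal sketch of the underlying idea follows, assuming a shared codebook with prior-derived code lengths and a quadratic distortion weighted by the posterior precision 1/σ²; the function name, codebook, prior, and λ value here are illustrative choices, not the authors' implementation.

```python
import numpy as np

def quantize_dimension(mu, sigma, codebook, code_lengths, lam):
    """Pick the codebook entry minimizing a rate-distortion objective.

    Distortion is weighted by the posterior precision 1/sigma^2, so
    dimensions the model is uncertain about tolerate coarser quantization.
    (Toy sketch of the idea behind Algorithm 1, not the authors' code.)
    """
    distortion = (codebook - mu) ** 2 / (2.0 * sigma ** 2)
    objective = distortion + lam * code_lengths
    return int(np.argmin(objective))

# Hypothetical example: a small codebook whose code lengths come from a prior.
codebook = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
prior = np.array([0.05, 0.2, 0.5, 0.2, 0.05])
code_lengths = -np.log2(prior)  # ideal code length in bits

# A low-uncertainty dimension snaps to the nearest code point, while a
# high-uncertainty dimension prefers a cheaper (higher-prior) point.
k_confident = quantize_dimension(mu=0.9, sigma=0.05, codebook=codebook,
                                 code_lengths=code_lengths, lam=0.1)
k_uncertain = quantize_dimension(mu=0.9, sigma=2.0, codebook=codebook,
                                 code_lengths=code_lengths, lam=0.1)
print(codebook[k_confident], codebook[k_uncertain])  # → 1.0 0.0
```

This captures the behavior the paper's experiments attribute to accounting for posterior uncertainty: the trade-off between bit rate and distortion is resolved dimension by dimension, and λ sweeps out the rate-distortion curve from a single trained model.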