Fast Relative Entropy Coding with A* coding

Authors: Gergely Flamich, Stratis Markou, José Miguel Hernández-Lobato

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We provide experimental evidence that AD* also has O(D∞[Q||P]) expected runtime. We prove that AS* and AD* achieve an expected codelength of O(D_KL[Q||P]). Further, we introduce DAD*, an approximate algorithm based on AD* which retains its favourable runtime and has bias similar to that of alternative methods. Focusing on VAEs, we propose the IsoKL VAE (IKVAE), which can be used with DAD* to further improve compression efficiency. We evaluate A* coding with (IK)VAEs on MNIST, showing that it can losslessly compress images near the theoretically optimal limit.
Researcher Affiliation | Collaboration | ¹Department of Engineering, University of Cambridge, Cambridge, UK; ²Microsoft Research, Cambridge, UK; ³Alan Turing Institute, London, UK.
Pseudocode | Yes | Algorithm 1: A* coding. Blue parts show modifications of A* sampling (Maddison et al., 2014).
Open Source Code | Yes | We make this approximate method available in our code repository.
Open Datasets | Yes | We evaluate A* coding with (IK)VAEs on MNIST, showing that it can losslessly compress images near the theoretically optimal limit.
Dataset Splits | No | The paper uses MNIST for its image compression experiments but does not specify training, validation, or test splits (percentages or counts).
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models or memory) used to run its experiments.
Software Dependencies | No | The paper does not list the software dependencies (library names and versions) needed to replicate its experiments.
Experiment Setup | Yes | For DAD*, we set κ = 2 based on preliminary experiments. We compared the performance of AD* and DAD* on image compression experiments on MNIST, using the feedforward VAE architecture of Townsend et al. (2018), with Gaussian Q and P.
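
The codelength bound quoted in the Research Type row is stated in terms of the KL divergence D_KL[Q||P], and the Experiment Setup row notes that Q and P are Gaussian. For reference, here is a minimal Python sketch of the closed-form KL divergence between diagonal Gaussians; the function name and the diagonal-covariance assumption are ours, not taken from the paper's code.

```python
import numpy as np

def gaussian_kl_nats(mu_q, sigma_q, mu_p, sigma_p):
    """D_KL[Q || P] in nats for diagonal Gaussians Q = N(mu_q, diag(sigma_q^2))
    and P = N(mu_p, diag(sigma_p^2)). Hypothetical helper, not the authors' code."""
    var_q, var_p = np.square(sigma_q), np.square(sigma_p)
    return 0.5 * np.sum(
        var_q / var_p                      # trace term
        + np.square(mu_p - mu_q) / var_p   # mean-shift term
        - 1.0
        + np.log(var_p / var_q)            # log-determinant term
    )

# The O(D_KL[Q || P]) codelength bound is in nats here; divide by ln 2 for bits.
kl = gaussian_kl_nats(np.zeros(4), 0.5 * np.ones(4), np.zeros(4), np.ones(4))
print(kl / np.log(2), "bits")
```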
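The Pseudocode row refers to Algorithm 1 (A* coding), which is not reproduced here. To illustrate the general relative entropy coding setup it instantiates — encoder and decoder share a random seed, candidates are drawn from the proposal P, and only an index is transmitted — below is a toy finite-alphabet sketch using Gumbel-perturbed importance weights. This is a generic REC illustration under our own assumptions, not the paper's algorithm; in general the candidate budget must grow roughly like exp(D_KL[Q||P]) for the selected sample to approximate Q, which is the runtime cost that A* coding is designed to tame.

```python
import numpy as np

def rec_encode(log_q, log_p, seed, n_candidates):
    """Toy shared-randomness REC (hypothetical sketch, not the paper's
    Algorithm 1). Returns the index of the selected candidate."""
    rng = np.random.default_rng(seed)
    # Draw candidates from the proposal P using the shared seed.
    candidates = rng.choice(len(log_p), size=n_candidates, p=np.exp(log_p))
    # Gumbel-max over log importance weights log(q/p) picks a candidate whose
    # distribution approaches Q as n_candidates grows.
    gumbels = rng.gumbel(size=n_candidates)
    scores = log_q[candidates] - log_p[candidates] + gumbels
    return int(np.argmax(scores))

def rec_decode(log_p, seed, n_candidates, index):
    """Regenerates the identical candidate sequence from the shared seed
    (the proposal draw is the first use of the generator in both routines)."""
    rng = np.random.default_rng(seed)
    candidates = rng.choice(len(log_p), size=n_candidates, p=np.exp(log_p))
    return int(candidates[index])

# Round trip on a 4-symbol alphabet: only the index is "transmitted".
log_p = np.log(np.array([0.25, 0.25, 0.25, 0.25]))
log_q = np.log(np.array([0.7, 0.1, 0.1, 0.1]))
k = rec_encode(log_q, log_p, seed=0, n_candidates=16)
print(rec_decode(log_p, seed=0, n_candidates=16, index=k))
```

Note the design point this makes concrete: because the decoder can replay the encoder's randomness, the message is just an index into a shared candidate list, which is why REC needs no quantisation of the latent sample.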