Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Alpha-divergence Variational Inference Meets Importance Weighted Auto-Encoders: Methodology and Asymptotics

Authors: Kamélia Daudel, Joe Benton, Yuyang Shi, Arnaud Doucet

JMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Lastly, we illustrate our theoretical claims over toy and real-data examples. ... In this section, our goal is to verify the validity of the theoretical results we established over several numerical experiments, starting with a Gaussian example in which the distribution of the weights is exactly log-normal. ... We consider the case of a variational auto-encoder (VAE) model designed to generate MNIST digits with a d-dimensional latent space." (Section 6, Numerical Experiments: 6.1 Gaussian Example, 6.2 Linear Gaussian Example, 6.3 Variational Auto-encoder)
Researcher Affiliation | Academia | "Kamélia Daudel, Joe Benton*, Yuyang Shi*, Arnaud Doucet. Department of Statistics, University of Oxford, Oxford OX1 3TG, United Kingdom"
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. The methods are described using mathematical equations and prose.
Open Source Code | No | The paper does not provide any explicit statement about releasing code, nor does it include links to a code repository in the main text or supplementary materials.
Open Datasets | Yes | "We consider the case of a variational auto-encoder (VAE) model designed to generate MNIST digits" (Section 6.3, Variational Auto-encoder)
Dataset Splits | Yes | "We plot in Figure 21 the NLL estimate on the MNIST test set as a function of α after training VAEs on the MNIST training set using either the reparameterized (rep) or the doubly-reparameterized (drep) gradient estimators of the VR-IWAE objective with N = 10, 100 and d = 50. Here, all the models are trained for 1000 epochs using the Adam optimizer with learning rate 1e-3 and batch size 100."
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory. It refers to a "computational budget" but gives no concrete hardware information.
Software Dependencies | No | "Here, all the models are trained for 1000 epochs using the Adam optimizer with learning rate 1e-3 and batch size 100." While the Adam optimizer is mentioned, no version number for it or for any other software library (e.g., Python, PyTorch, TensorFlow) is provided.
Experiment Setup | Yes | "Here, all the models are trained for 1000 epochs using the Adam optimizer with learning rate 1e-3 and batch size 100."
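For context on the VR-IWAE objective quoted in the rows above: it is a Monte Carlo bound computed from N importance weights, interpolating between the IWAE objective (α = 0) and the standard ELBO (α → 1). The following is a minimal NumPy sketch of such an estimate; the function name and input convention are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def vr_iwae_bound(log_w, alpha):
    """Monte Carlo estimate of a VR-IWAE-style bound for alpha != 1.

    log_w : array of N log importance weights log(p(x, z_i) / q(z_i | x)).
    Returns (1 / (1 - alpha)) * log((1/N) * sum_i w_i^(1 - alpha)),
    evaluated via a log-sum-exp shift for numerical stability.
    """
    log_w = np.asarray(log_w, dtype=float)
    n = log_w.shape[0]
    scaled = (1.0 - alpha) * log_w          # log of w_i^(1 - alpha)
    m = scaled.max()                        # log-sum-exp shift
    log_mean = m + np.log(np.exp(scaled - m).sum()) - np.log(n)
    return log_mean / (1.0 - alpha)
```

At α = 0 this reduces to the IWAE objective log((1/N) Σ_i w_i), and for constant weights it returns the common log-weight regardless of α, which gives a quick sanity check.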