Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Leveraging the Exact Likelihood of Deep Latent Variable Models
Authors: Pierre-Alexandre Mattei, Jes Frellsen
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we investigate the empirical realisations of our theoretical findings on DLVMs. |
| Researcher Affiliation | Academia | Pierre-Alexandre Mattei Department of Computer Science IT University of Copenhagen EMAIL Jes Frellsen Department of Computer Science IT University of Copenhagen EMAIL |
| Pseudocode | Yes | Algorithm 1 Metropolis-within-Gibbs sampler for missing data imputation using a trained VAE |
| Open Source Code | No | The paper states 'The code was written in Python and uses the PyTorch framework' in Appendix E, but does not provide any link or explicit statement about the code being open-sourced or publicly available. |
| Open Datasets | Yes | We train two DLVMs on the Frey faces data set... We compare the two samplers for single imputation of the test sets of three data sets: Caltech 101 Silhouettes and statically binarised versions of MNIST and OMNIGLOT. |
| Dataset Splits | No | The paper states: 'Convergence and mixing of the chains can be monitored using a validation set of complete data.' However, it does not specify any details regarding the size, percentages, or methodology of this validation split. |
| Hardware Specification | Yes | All models were trained on NVIDIA GeForce GTX 1080 Ti GPUs. |
| Software Dependencies | No | The paper states 'The code was written in Python and uses the PyTorch framework.' (Appendix E) but does not provide specific version numbers for either Python or PyTorch. |
| Experiment Setup | Yes | The models were trained with the Adam optimizer with a learning rate of 10⁻³. Batch sizes of 100 were used. The latent dimension was set to 50, and all hidden layers had 200 units. The ELBO was maximized using stochastic gradient ascent for 1000 epochs on MNIST and OMNIGLOT, and 2000 epochs on Caltech Silhouettes. Frey faces training ran for 10000 epochs with a learning rate of 10⁻⁴ and a batch size of 50. The noise level ξ was set to 2⁻⁴. |
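The Pseudocode row refers to the paper's Algorithm 1, a Metropolis-within-Gibbs sampler that alternates a Metropolis step on the latent code (using the trained encoder as an independence proposal) with a Gibbs step resampling the missing entries from the decoder. The sketch below illustrates that structure on a toy linear-Gaussian model standing in for a trained VAE: prior z ~ N(0, 1), decoder xᵢ | z ~ N(z, 1), with x₁ observed and x₂ missing. The toy model, the deliberately mis-specified encoder, and all names are illustrative assumptions, not the paper's code.

```python
# Sketch of a Metropolis-within-Gibbs imputation loop (toy model, not the paper's code).
import math
import random

random.seed(0)

def log_normal(x, mu, sigma):
    """Log-density of N(mu, sigma^2)."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def encoder(x1, x2):
    """Stand-in for the amortised posterior q(z | x). The exact posterior of the
    toy model is N((x1 + x2) / 3, 1/3); we widen the std to 0.8 on purpose so the
    accept/reject step is non-trivial, mimicking an approximate VAE encoder."""
    return (x1 + x2) / 3.0, 0.8

def metropolis_within_gibbs(x1_obs, n_iters=2000):
    x2, z = 0.0, 0.0          # initialise the missing entry and the latent code
    samples = []
    for _ in range(n_iters):
        # Metropolis step on z: independence proposal q(z | x) at the current imputation.
        mu_q, sd_q = encoder(x1_obs, x2)
        z_prop = random.gauss(mu_q, sd_q)
        # Accept with prob min(1, p(z')p(x|z') q(z|x) / (p(z)p(x|z) q(z'|x))).
        log_alpha = (log_normal(z_prop, 0, 1)
                     + log_normal(x1_obs, z_prop, 1) + log_normal(x2, z_prop, 1)
                     - log_normal(z, 0, 1)
                     - log_normal(x1_obs, z, 1) - log_normal(x2, z, 1)
                     + log_normal(z, mu_q, sd_q) - log_normal(z_prop, mu_q, sd_q))
        if math.log(random.random()) < log_alpha:
            z = z_prop
        # Gibbs step: resample the missing entry from the decoder p(x2 | z).
        x2 = random.gauss(z, 1.0)
        samples.append(x2)
    return samples

samples = metropolis_within_gibbs(x1_obs=2.0)
burn_in = samples[500:]
# For this toy model E[x2 | x1 = 2] = 1, so the imputation mean should settle near 1.
print(sum(burn_in) / len(burn_in))
```

Because the chain targets p(z, x_mis | x_obs) exactly (the encoder only shapes the proposal, not the target), this sampler remains asymptotically correct even when the encoder is a poor approximation, which is the point of the paper's algorithm.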
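For quick reference, the hyperparameters quoted in the Experiment Setup row can be collected into a small config object. The values come from the row above; the class and field names are illustrative, not taken from the authors' code.

```python
# Hypothetical consolidation of the reported training hyperparameters.
from dataclasses import dataclass

@dataclass
class TrainConfig:
    optimizer: str = "adam"
    learning_rate: float = 1e-3
    batch_size: int = 100
    latent_dim: int = 50
    hidden_units: int = 200      # all hidden layers
    epochs: int = 1000           # MNIST and OMNIGLOT

MNIST = TrainConfig()
CALTECH_SILHOUETTES = TrainConfig(epochs=2000)
FREY_FACES = TrainConfig(learning_rate=1e-4, batch_size=50, epochs=10000)
```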