Fully Bayesian Autoencoders with Latent Sparse Gaussian Processes
Authors: Ba-Hien Tran, Babak Shahbaba, Stephan Mandt, Maurizio Filippone
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our model on a range of experiments focusing on dynamic representation learning and generative modeling, demonstrating the strong performance of our approach in comparison to existing methods that combine Gaussian Processes and autoencoders. |
| Researcher Affiliation | Academia | (1) Department of Data Science, EURECOM, France; (2) Departments of Statistics and Computer Science, University of California, Irvine, USA. |
| Pseudocode | Yes | Algorithm 1: Inference for BAEs with SGHMC (a minimal sketch of the SGHMC update appears after this table). |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code for the methodology described, nor does it include links to a code repository. |
| Open Datasets | Yes | We follow the data generation procedure of Jazbec et al. (2021), in which a squared-exponential GP kernel with a lengthscale l = 2 was used. Notice that, unlike Jazbec et al. (2021), we generate a fixed number of 35 videos for training and another 35 videos for testing... In the next experiment, we consider a large-scale benchmark of conditional generation...a rotated MNIST dataset (N = 4050). (A sketch of the GP sampling step appears after this table.) |
| Dataset Splits | Yes | Unlike Jazbec et al. (2021), we generate a fixed number of 35 videos for training and another 35 videos for testing. |
| Hardware Specification | Yes | All experiments were conducted on a server equipped with a Tesla T4 GPU with 16 GB of RAM. |
| Software Dependencies | No | The paper mentions 'We use an Adam optimizer (Kingma & Ba, 2015)' but does not specify versions for any of the software libraries (e.g., Python, PyTorch/TensorFlow, scikit-learn) required to reproduce the experiments. |
| Experiment Setup | Yes | We set the number of SGHMC steps and the number of optimization steps to J = 30 and K = 50, respectively. The details for all experiments are available in Appendix D. |
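The table above quotes Algorithm 1 (inference for BAEs with SGHMC) and an experiment setup using J = 30 SGHMC steps. For orientation only, here is a minimal NumPy sketch of the SGHMC update in the simplified form of Chen et al. (2014); the learning rate, friction coefficient, and toy quadratic potential are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def sghmc_step(theta, v, stoch_grad_u, lr=1e-4, friction=0.05, rng=None):
    """One SGHMC update in the simplified form of Chen et al. (2014):
        v     <- (1 - friction) * v - lr * grad_U(theta) + N(0, 2 * friction * lr)
        theta <- theta + v
    where grad_U is a stochastic (mini-batch) gradient of the negative
    log posterior. The lr and friction values are illustrative only.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, np.sqrt(2.0 * friction * lr), size=theta.shape)
    v = (1.0 - friction) * v - lr * stoch_grad_u(theta) + noise
    return theta + v, v

# Toy usage: sample from a standard Gaussian, i.e. U(theta) = 0.5 * theta**2,
# so grad_U(theta) = theta; the exact gradient stands in for a mini-batch
# estimate here.
theta, v = np.zeros(1), np.zeros(1)
for _ in range(30):  # J = 30 SGHMC steps, matching the quoted setup
    theta, v = sghmc_step(theta, v, lambda th: th)
```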
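Similarly, the quoted data-generation procedure draws latent trajectories from a GP prior with a squared-exponential kernel and lengthscale l = 2. The sketch below shows that sampling step under stated assumptions: only the kernel form and the lengthscale come from the quoted text, while the number of frames, the latent dimensionality, and the jitter value are hypothetical choices.

```python
import numpy as np

def se_kernel(t1, t2, lengthscale=2.0, variance=1.0):
    """Squared-exponential (RBF) kernel on two sets of time points."""
    d = t1[:, None] - t2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_gp_trajectories(num_frames=20, latent_dim=2, lengthscale=2.0, seed=0):
    """Draw one GP sample per latent dimension at regularly spaced times.

    num_frames, latent_dim, and the 1e-6 jitter are illustrative values,
    not taken from the paper; lengthscale l = 2 is from the quoted text.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(num_frames, dtype=float)
    K = se_kernel(t, t, lengthscale) + 1e-6 * np.eye(num_frames)  # jitter for stability
    L = np.linalg.cholesky(K)
    return L @ rng.standard_normal((num_frames, latent_dim))  # shape: (num_frames, latent_dim)
```

Each column is one smooth latent trajectory; in the Jazbec et al. (2021)-style benchmark referenced above, such trajectories are decoded into the generated video frames.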