Laplacian Autoencoders for Learning Stochastic Representations

Authors: Marco Miani, Frederik Warburg, Pablo Moreno-Muñoz, Nicki Skafte, Søren Hauberg

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, we show that our Laplacian autoencoder estimates well-calibrated uncertainties in both latent and output space. We demonstrate that this results in improved performance across a multitude of downstream tasks.
Researcher Affiliation | Academia | Technical University of Denmark
Pseudocode | No | The paper includes a diagram (Figure 4: "Iterative training procedure") outlining the method, but the procedure is not presented as structured pseudocode or a formal algorithm block.
Open Source Code | Yes | The training code is implemented in PyTorch and available at https://github.com/FrederikWarburg/LaplaceAE
Open Datasets | Yes | We evaluate OOD performance on the commonly used benchmarks (Nalisnick et al., 2019b), where we use FASHIONMNIST (Xiao et al., 2017) as in-distribution and MNIST (LeCun et al., 1998) as OOD. ... In Tab. 4 we conduct a similar experiment on the CELEBA (Liu et al., 2015) facial dataset...
Dataset Splits | Yes | Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See appendix.
Hardware Specification | Yes | The exact diagonal approximation runs out of memory for a 36 × 36 × 3 image on an 11 GB NVIDIA GeForce GTX 1080 Ti.
Software Dependencies | No | The paper mentions "PyTorch" as the implementation framework but does not specify a version number or other software dependencies with their versions.
Experiment Setup | Yes | Appendix A provides more details on the experimental setup. Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See appendix.
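
As context for the Open Datasets row above, the FASHIONMNIST (in-distribution) vs. MNIST (OOD) benchmark cited from the paper can be reproduced with standard torchvision loaders. The following is a minimal sketch, not the authors' code; the data root and batch size are illustrative assumptions.

import torch
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

# In-distribution test set: FashionMNIST (Xiao et al., 2017)
id_test = datasets.FashionMNIST(root="./data", train=False, download=True, transform=to_tensor)
# Out-of-distribution test set: MNIST (LeCun et al., 1998)
ood_test = datasets.MNIST(root="./data", train=False, download=True, transform=to_tensor)

id_loader = torch.utils.data.DataLoader(id_test, batch_size=128, shuffle=False)
ood_loader = torch.utils.data.DataLoader(ood_test, batch_size=128, shuffle=False)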
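
The Research Type and Pseudocode rows refer to the paper's iterative procedure for estimating uncertainty in latent and output space. The sketch below is not the authors' LaplaceAE implementation; it only illustrates the general idea of a post-hoc diagonal Laplace approximation over a (pre-trained) decoder. The layer sizes, the prior precision, and the use of squared mini-batch gradients as a stand-in for the empirical Fisher diagonal are all assumptions made for illustration.

import torch
import torch.nn as nn

# Toy autoencoder; in practice both networks would already be trained with an
# MSE reconstruction loss before the Laplace step below.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.Tanh(), nn.Linear(64, 2))
decoder = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 784))
mse = nn.MSELoss(reduction="sum")

def posterior_precision(loader, prior_precision=1.0):
    # Diagonal curvature estimate: squared mini-batch gradients of the loss
    # w.r.t. decoder parameters (a coarse stand-in for the empirical Fisher).
    diag = [torch.zeros_like(p) for p in decoder.parameters()]
    for x, _ in loader:
        z = encoder(x).detach()
        decoder.zero_grad()
        loss = mse(decoder(z), x.view(x.size(0), -1))
        loss.backward()
        for d, p in zip(diag, decoder.parameters()):
            d += p.grad.detach() ** 2
    # Posterior precision = curvature estimate + prior precision.
    return [d + prior_precision for d in diag]

def stochastic_reconstruction(x, precision):
    # Sample decoder weights from the diagonal Gaussian posterior, decode,
    # then restore the mean (MAP) weights.
    means = [p.detach().clone() for p in decoder.parameters()]
    with torch.no_grad():
        for p, m, prec in zip(decoder.parameters(), means, precision):
            p.copy_(m + torch.randn_like(m) / prec.sqrt())
        recon = decoder(encoder(x))
        for p, m in zip(decoder.parameters(), means):
            p.copy_(m)
    return recon

With a data loader such as id_loader from the previous sketch, precision = posterior_precision(id_loader) followed by repeated calls to stochastic_reconstruction(x, precision) yields a set of sampled reconstructions whose spread reflects output-space uncertainty; this is the generic post-hoc variant, whereas the paper trains the approximation iteratively (Figure 4).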