Model Selection for Bayesian Autoencoders

Authors: Ba-Hien Tran, Simone Rossi, Dimitrios Milios, Pietro Michiardi, Edwin V. Bonilla, Maurizio Filippone

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our approach qualitatively and quantitatively using a vast experimental campaign on a number of unsupervised learning tasks and show that, in small-data regimes where priors matter, our approach provides state-of-the-art results, outperforming multiple competitive baselines."
Researcher Affiliation | Collaboration | Ba-Hien Tran: EURECOM (France); Simone Rossi: EURECOM (France); Dimitrios Milios: EURECOM (France); Pietro Michiardi: EURECOM (France); Edwin V. Bonilla: CSIRO's Data61, The Australian National University, and The University of Sydney (Australia); Maurizio Filippone: EURECOM (France)
Pseudocode | No | The paper describes its algorithms and methods in prose but does not include a structured pseudocode block or algorithm section.
Open Source Code | No | The paper does not contain an explicit statement about releasing code or a link to a source-code repository for the described methodology.
Open Datasets | Yes | "MNIST [29]: We use 100 examples of the 0 digits to tune the prior. The training set consists of examples of 1-9 digits, whereas the test set contains 10 000 instances of all digits. FREY-YALE [12]: We use 1 956 examples of FREY faces to optimize the prior. The training set and test set are comprised of YALE faces." The paper also uses the CELEBA dataset [30].
Dataset Splits | Yes | "MNIST [29]: We use 100 examples of the 0 digits to tune the prior. The training set consists of examples of 1-9 digits, whereas the test set contains 10 000 instances of all digits. ... For our proposal, we use 1 000 examples that are randomly chosen from the original training set to learn the prior distribution. The test set consists of about 20 000 images." (A reconstruction sketch of the MNIST split appears after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU or GPU models, memory) used for running its experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9).
Experiment Setup | Yes | "Unless otherwise stated, all models including ours share the same latent dimensionality (K = 50). ... For our proposal, we use 1 000 examples that are randomly chosen from the original training set to learn the prior distribution. ... The test set consists of about 20 000 images." (A configuration sketch also follows the table.)
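
Since the paper releases neither code nor pseudocode, the MNIST protocol quoted above cannot be checked against an official implementation. Below is a minimal sketch, not the authors' code, of how the described split could be reconstructed with torchvision; the choice of which 100 zeros to keep is our assumption, as the paper does not specify it.

```python
from torchvision import datasets

mnist_train = datasets.MNIST("data", train=True, download=True)
mnist_test = datasets.MNIST("data", train=False, download=True)

# Prior tuning: 100 examples of the digit 0. The paper does not say how the
# 100 are selected; taking the first 100 is our assumption.
prior_images = mnist_train.data[mnist_train.targets == 0][:100]

# Training set: digits 1-9 only (all zeros held out for the prior).
train_images = mnist_train.data[mnist_train.targets != 0]

# Test set: all 10 000 test instances, covering all digits.
test_images = mnist_test.data
```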
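The Experiment Setup row pins down two concrete quantities: a shared latent dimensionality (K = 50) and a prior-learning subset of 1 000 examples drawn at random from the CELEBA training set. The following hedged sketch shows that subset selection; the random seed and the standard CELEBA partition size are our assumptions, since the paper states neither.

```python
import numpy as np

LATENT_DIM = 50          # K = 50, shared by all models per the paper
N_PRIOR_EXAMPLES = 1000  # examples randomly chosen to learn the prior

rng = np.random.default_rng(seed=0)  # seed is our assumption; none is reported

# 162 770 is the size of the standard CELEBA training partition; the paper
# only says the examples come from "the original training set".
prior_idx = rng.choice(162_770, size=N_PRIOR_EXAMPLES, replace=False)
```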