Amortized Inference Regularization

Authors: Rui Shu, Hung H. Bui, Shengjia Zhao, Mykel J. Kochenderfer, Stefano Ermon

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conducted experiments on statically binarized MNIST, statically binarized OMNIGLOT, and the Caltech 101 Silhouettes datasets.
Researcher Affiliation | Collaboration | Rui Shu (Stanford University, ruishu@stanford.edu); Hung H. Bui (DeepMind, buih@google.com); Shengjia Zhao (Stanford University, sjzhao@stanford.edu); Mykel J. Kochenderfer (Stanford University, mykel@stanford.edu); Stefano Ermon (Stanford University, ermon@cs.stanford.edu)
Pseudocode | No | The paper does not contain any sections or figures explicitly labeled as 'Pseudocode' or 'Algorithm'.
Open Source Code | No | The paper does not contain any explicit statements about releasing source code or links to a code repository.
Open Datasets | Yes | We conducted experiments on statically binarized MNIST, statically binarized OMNIGLOT, and the Caltech 101 Silhouettes datasets.
Dataset Splits | No | The paper mentions using a 'validation set' for hyperparameter tuning (Section 4.2 and Figure 2) and refers to 'Table 7' for details, but does not give explicit percentages or sample counts for the training/validation/test splits in the main body of the paper.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types) used for running its experiments.
Software Dependencies | No | The paper mentions using 'Adam [19]' as an optimizer but does not specify its version or any other software dependencies with version numbers.
Experiment Setup | No | The paper states that models were 'trained using Adam' and that 'Hyperparameter tuning of DVAE's σ and WNI-VAE's FH is described in Table 7 (in the appendix)', but it does not provide concrete hyperparameter values or full training configurations in the main text.
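
The report above notes that the paper trains denoising VAEs (DVAE, with an encoder-input noise scale σ) using Adam, but does not list concrete hyperparameters in the main text. The sketch below only illustrates what such a DVAE-style setup could look like; the class name, layer sizes, learning rate, noise scale, and the random binary batch standing in for statically binarized MNIST are all assumptions for illustration, not values or code from the paper.

```python
# Minimal sketch (not the authors' code): a denoising-VAE-style model in PyTorch,
# where Gaussian noise of scale sigma is added to the encoder input as an
# amortized inference regularizer. All hyperparameter values are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=32, h_dim=200, sigma=0.1):
        super().__init__()
        self.sigma = sigma  # encoder-input noise scale (the DVAE hyperparameter)
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        # Noise is applied only to the inference network's input.
        x_noisy = x + self.sigma * torch.randn_like(x)
        mu, logvar = self.enc(x_noisy).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        logits = self.dec(z)
        recon = F.binary_cross_entropy_with_logits(logits, x, reduction="sum") / x.size(0)
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
        return recon + kl  # negative ELBO

model = DenoisingVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, as stated in the paper
x = torch.bernoulli(torch.rand(64, 784))  # placeholder batch for binarized MNIST
for step in range(5):
    loss = model(x)
    opt.zero_grad()
    loss.backward()
    opt.step()
```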