Variational Bayes under Model Misspecification
Authors: Yixin Wang, David Blei
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct two simulation studies that demonstrate the theoretical results. ... We illustrate the implications of Theorems 1, 2, 3 and 4 with simulation studies. We studied two models, Bayesian GLMM [21] and LDA [6]. To make the models misspecified, we generate datasets from an incorrect model and then perform approximate posterior inference. We evaluate how close the approximate posterior is to the limiting exact posterior δ_θ*, and how well the approximate posterior predictive captures the test sets. ... Figure 2: Dataset size versus closeness to the limiting exact posterior δ_θ* and posterior predictive log likelihood on test data (mean ± sd). |
| Researcher Affiliation | Academia | Yixin Wang (Columbia University); David M. Blei (Columbia University) |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | No explicit statement providing concrete access to source code for the methodology described in this paper was found. |
| Open Datasets | No | We simulate data from a negative binomial linear mixed model (LMM)... We simulate N documents from a 15-dimensional LDA. The paper states that the data were simulated, but it does not provide access information (link, citation, or repository) for the simulated datasets or for any publicly available dataset. |
| Dataset Splits | No | The paper mentions simulating data and performing posterior inference and evaluating against test sets, but does not provide specific details on training/validation/test splits, percentages, or sample counts, nor does it cite pre-defined splits for reproducibility. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments were mentioned in the paper. |
| Software Dependencies | No | We use two automated inference algorithms in Stan [8]: automatic differentiation variational inference (ADVI) [20] for VB and No-U-Turn sampler (NUTS) [16] for HMC. While software names are mentioned, specific version numbers for these software dependencies are not provided. |
| Experiment Setup | Yes | We lay out the detailed simulation setup in Appendix I. |
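The Open Datasets row notes that the experiments draw documents from a 15-dimensional LDA but do not release the simulated data. A minimal sketch of such a generative simulation is below; the vocabulary size, document count, document length, and Dirichlet concentrations are assumptions for illustration, not values taken from the paper or its Appendix I.

```python
import numpy as np

rng = np.random.default_rng(0)

# The paper specifies only K = 15 topics; the remaining sizes and the
# Dirichlet concentrations (alpha, eta) are assumed for this sketch.
K, V, N_DOCS, DOC_LEN = 15, 100, 50, 80
alpha, eta = 0.1, 0.1

# Topic-word distributions: one V-dimensional categorical per topic.
beta = rng.dirichlet(np.full(V, eta), size=K)  # shape (K, V)

docs = []
for _ in range(N_DOCS):
    theta = rng.dirichlet(np.full(K, alpha))      # per-document topic mixture
    z = rng.choice(K, size=DOC_LEN, p=theta)      # topic assignment per word
    words = np.array([rng.choice(V, p=beta[k]) for k in z])
    docs.append(words)

print(len(docs), docs[0].shape)
```

A misspecified variant, as described in the Research Type row, would generate the corpus from a different model (e.g., perturbed topic proportions) and still fit LDA with ADVI or NUTS.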