On Divergence Measures for Bayesian Pseudocoresets
Authors: Balhae Kim, Jungwon Choi, Seanie Lee, Yoonho Lee, Jung-Woo Ha, Juho Lee
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical results demonstrate that the pseudocoresets constructed from these methods reflect the true posterior even in high-dimensional Bayesian inference problems. |
| Researcher Affiliation | Collaboration | KAIST¹, Stanford University², NAVER AI Lab³, AITRICS⁴ — {balhaekim, jungwon.choi, lsnfamily02}@kaist.ac.kr, yoonho@stanford.edu, jungwoo.ha@navercorp.com, juholee@kaist.ac.kr |
| Pseudocode | Yes | Algorithm 1 Bayesian Pseudocoresets with Forward KL |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | We use the CIFAR10 dataset [15] to generate Bayesian pseudocoresets, and evaluate on the test split of CIFAR10 in addition to the CIFAR10-C dataset [12] |
| Dataset Splits | No | The paper mentions evaluating on the "test split" of CIFAR10 but does not detail the training/validation splits, give their percentages, or cite a source for how the splits were obtained (e.g., "standard train/validation split from X"). |
| Hardware Specification | Yes | We use 32 cores of Intel Xeon CPU Gold 5120 and 4 Tesla V100s. |
| Software Dependencies | No | The paper does not provide version numbers for any software dependencies (e.g., Python, PyTorch, TensorFlow). |
| Experiment Setup | Yes | As our results were not sensitive to the choice of hyperparameters, we used a single set of hyperparameters that performed best in initial experiments. Please refer to Appendix B for detailed evaluation settings including hyperparameters. |
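The "Pseudocode" row references the paper's Algorithm 1 (Bayesian pseudocoresets with forward KL): a small synthetic dataset is optimized so that the posterior it induces matches the full-data posterior under the forward KL divergence. Since the paper's code is not released, the toy sketch below only illustrates the idea in a conjugate 1-D Gaussian model where both posteriors and the KL are closed-form; the model, the `posterior`/`forward_kl` helpers, and all hyperparameters are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (an assumption for illustration): x_i ~ N(theta, sigma2),
# prior theta ~ N(0, 1), so the posterior over theta is Gaussian.
sigma2 = 1.0
n, m = 1000, 5                         # full data size, pseudocoreset size
x = rng.normal(2.0, np.sqrt(sigma2), size=n)

def posterior(data):
    """Closed-form Gaussian posterior N(mu, var) over theta."""
    prec = 1.0 + len(data) / sigma2
    return (data.sum() / sigma2) / prec, 1.0 / prec

def forward_kl(mu_p, var_p, mu_q, var_q):
    """KL(p || q) between two 1-D Gaussians, p = full-data posterior."""
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

mu_p, var_p = posterior(x)             # target: full-data posterior
u = rng.normal(0.0, 1.0, size=m)       # pseudocoreset, random init

lr = 0.5
for _ in range(200):
    mu_q, var_q = posterior(u)
    # Gradient of the forward KL w.r.t. the coreset posterior mean,
    # chained through the mean, which is linear in each pseudo-point.
    dmu = (mu_q - mu_p) / var_q
    du = dmu * (1.0 / sigma2) / (1.0 + m / sigma2)
    u -= lr * du                       # same shift for every pseudo-point

mu_q, var_q = posterior(u)
print(abs(mu_q - mu_p))                # posterior means match after training
```

After optimization the coreset posterior mean matches the full-data posterior mean exactly; a residual KL remains because m = 5 pseudo-points cannot reproduce the posterior variance of n = 1000 real points, which is the usual trade-off when compressing a dataset this aggressively.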