Bayesian Coreset Construction via Greedy Iterative Geodesic Ascent
Authors: Trevor Campbell, Tamara Broderick
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper concludes with validation of GIGA on both synthetic and real datasets, demonstrating that it reduces posterior approximation error by orders of magnitude compared with previous coreset constructions. |
| Researcher Affiliation | Academia | Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, United States. |
| Pseudocode | Yes | Algorithm 1 GIGA: Greedy Iterative Geodesic Ascent (a sketch of the algorithm appears below the table) |
| Open Source Code | Yes | Code for these experiments is available at https://github.com/trevorcampbell/bayesian-coresets. |
| Open Datasets | No | The paper mentions datasets such as 'Phishing', 'DS1', 'Bike Trips', and 'Airport Delays' and refers to Appendix D for references, but Appendix D lists only paper citations, not direct links or explicit statements that the datasets are publicly available. The synthetic datasets are generated by the authors. |
| Dataset Splits | No | The paper mentions 'posterior sampling steps' and compares the 'coreset' against the 'full dataset', but it does not specify training, validation, and test splits with percentages or sample counts. |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models or processor types used for running experiments. |
| Software Dependencies | No | The paper does not list ancillary software with version numbers (e.g., specific libraries or solvers). |
| Experiment Setup | Yes | For posterior inference, we used Hamiltonian Monte Carlo (Neal, 2011) with 15 leapfrog steps per sample. We simulated a total of 6,000 steps, with 1,000 warmup steps for step size adaptation with a target acceptance rate of 0.8, and 5,000 posterior sampling steps. (A configuration sketch follows the table.) |
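
The pseudocode row above refers to the paper's Algorithm 1. For orientation, below is a minimal NumPy sketch of GIGA, assuming each datum's log-likelihood has already been embedded as a finite-dimensional vector (the rows of `L`), as in the paper's random-projection experiments. The function name `giga` and its argument names are illustrative, not taken from the authors' codebase; the step size follows the paper's closed form for the geodesic line search.

```python
import numpy as np

def giga(L, M):
    """Minimal sketch of GIGA (Algorithm 1 in the paper).

    L : (N, D) array; row n is a finite-dimensional embedding of the
        log-likelihood vector L_n for datum n.
    M : iteration budget (at most M nonzero coreset weights).
    Returns nonnegative weights w such that L.T @ w approximates the
    column sum of L.
    """
    sigma = np.linalg.norm(L, axis=1)           # per-datum norms ||L_n||
    ell_n = L / sigma[:, None]                  # unit vectors ell_n
    L_sum = L.sum(axis=0)
    ell = L_sum / np.linalg.norm(L_sum)         # unit vector ell = L/||L||

    N, D = L.shape
    w = np.zeros(N)                             # weights over the ell_n
    y = np.zeros(D)                             # current iterate y(w_t)
    for _ in range(M):
        # Geodesic ascent direction from y toward ell (normalizing it is
        # unnecessary for the argmax below, since it is a common factor).
        d = ell - ell.dot(y) * y
        # Normalized geodesic direction from y toward each ell_n.
        dn = ell_n - ell_n.dot(y)[:, None] * y
        dn_nrm = np.linalg.norm(dn, axis=1)
        dn_nrm[dn_nrm == 0.0] = np.inf          # skip points aligned with y
        n = int(np.argmax(dn.dot(d) / dn_nrm))

        # Closed-form geodesic step size gamma_t.
        z0, z1, z2 = ell.dot(ell_n[n]), ell.dot(y), ell_n[n].dot(y)
        gamma = (z0 - z1 * z2) / ((z0 - z1 * z2) + (z1 - z0 * z2))
        gamma = np.clip(gamma, 0.0, 1.0)        # numerical safety

        # Convex combination on the sphere, then renormalize so that
        # y = ell_n.T @ w remains a unit vector.
        y_new = (1.0 - gamma) * y + gamma * ell_n[n]
        nrm = np.linalg.norm(y_new)
        w = (1.0 - gamma) * w
        w[n] += gamma
        w /= nrm
        y = y_new / nrm

    # Convert back to weights on the unnormalized L_n and rescale to the
    # optimal length along the final direction y.
    return np.linalg.norm(L_sum) * ell.dot(y) * w / sigma
```

The greedy choice maximizes alignment between the geodesic direction toward the normalized full-data vector and the geodesic direction toward each candidate; operating on the sphere of normalized vectors is what distinguishes GIGA from earlier Frank-Wolfe-based coreset constructions.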
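The experiment-setup row quotes the paper's HMC configuration, but (per the software-dependencies row) the paper does not name the implementing library. As one way to reproduce that configuration, here is a NumPyro sketch with an illustrative coreset-weighted logistic-regression model: the model, toy data, and weights are hypothetical stand-ins, while the sampler settings (15 leapfrog steps per sample, 1,000 warmup steps adapting the step size toward a 0.8 acceptance rate, 5,000 sampling steps) come from the quoted text.

```python
import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import HMC, MCMC

def coreset_logistic_model(X, y, w):
    """Hypothetical coreset-weighted logistic regression posterior;
    w scales each datum's log-likelihood contribution."""
    theta = numpyro.sample("theta", dist.Normal(jnp.zeros(X.shape[1]), 1.0))
    loglik = dist.Bernoulli(logits=X @ theta).log_prob(y)
    numpyro.factor("weighted_loglik", jnp.sum(w * loglik))

# Sampler settings from the quoted setup: 15 leapfrog steps per sample,
# warmup step-size adaptation targeting a 0.8 acceptance rate.
kernel = HMC(coreset_logistic_model, num_steps=15, target_accept_prob=0.8)
mcmc = MCMC(kernel, num_warmup=1000, num_samples=5000)  # 6,000 total steps

# Toy data just to make the sketch runnable.
key = random.PRNGKey(0)
X = random.normal(key, (100, 2))
y = (X[:, 0] > 0).astype(jnp.int32)
w = jnp.ones(100)  # uniform weights stand in for learned coreset weights
mcmc.run(random.PRNGKey(1), X, y, w)
samples = mcmc.get_samples()["theta"]
```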