Coin Sampling: Gradient-Based Bayesian Inference without Learning Rates
Authors: Louis Sharrock, Christopher Nemeth
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate the performance of our approach on a range of numerical examples, including several high-dimensional models and datasets, demonstrating comparable performance to other ParVI algorithms with no need to tune a learning rate. |
| Researcher Affiliation | Academia | 1Department of Mathematics, Lancaster University, UK. |
| Pseudocode | Yes | Algorithm 1 Coin Wasserstein Gradient Descent (a hedged sketch of this kind of update follows the table) |
| Open Source Code | Yes | Code to reproduce our numerical results can be found at https://github.com/louissharrock/Coin-SVGD. |
| Open Datasets | Yes | We test our algorithm using the Covertype dataset, which consists of 581,012 data points and 54 features. (Gershman et al., 2012) [...] We test the performance of our algorithms on several UCI datasets. (Liu & Wang, 2016; Hernández-Lobato & Adams, 2015) [...] We test our algorithm on the MovieLens dataset (Harper & Konstan, 2015)... |
| Dataset Splits | Yes | We randomly partition the data into a training dataset (70%), validation dataset (10%), and testing dataset (20%). (A split sketch follows the table.) |
| Hardware Specification | Yes | We perform all experiments using a MacBook Pro 16 (2021) laptop with Apple M1 Pro chip and 16GB of RAM. |
| Software Dependencies | No | The paper mentions 'Python 3, PyTorch, Theano, and JAX' but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | In all cases, we run both algorithms using N = 20 particles, and for T = 1000 iterations. We initialise the particles according to $(\theta_0^i)_{i=1}^{N} \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, 0.1^2)$. (These settings appear in the usage sketch below the table.) |
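
The pseudocode row above quotes Algorithm 1 (Coin Wasserstein Gradient Descent). To illustrate the idea behind a learning-rate-free update, below is a minimal NumPy sketch of a Kreps-style (KT) coin-betting update applied per particle, using the standard SVGD direction with an RBF kernel as the "coin outcome". All function names are ours, the kernel bandwidth is fixed for simplicity, and we omit the gradient normalisation the paper uses to keep the bets well-behaved; see the authors' repository for the actual implementation.

```python
# Minimal sketch (our names, not the authors' code) of per-particle
# KT coin betting, with the SVGD direction as the "coin outcome" c_t.
import numpy as np

def svgd_direction(score_fn, X, h=1.0):
    """Standard SVGD update direction with an RBF kernel of fixed bandwidth h."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)   # (N, N) sq. dists
    K = np.exp(-sq / (2 * h ** 2))                               # k(x_i, x_j)
    S = score_fn(X)                                              # grad log pi at each particle
    attraction = K @ S                                           # sum_j k(x_j, x_i) S_j
    repulsion = (K.sum(axis=1)[:, None] * X - K @ X) / h ** 2    # sum_j grad_{x_j} k(x_j, x_i)
    return (attraction + repulsion) / X.shape[0]

def coin_svgd(score_fn, x0, T=1000):
    """Learning-rate-free particle updates via KT coin betting:
    x_{t+1} = x_0 + (sum_s c_s) / (t + 1) * (1 + sum_s <c_s, x_s - x_0>)."""
    x, grad_sum = x0.copy(), np.zeros_like(x0)
    reward = np.zeros(x0.shape[0])                # running sum_s <c_s, x_s - x_0>
    for t in range(1, T + 1):
        c = svgd_direction(score_fn, x)           # coin outcome at x_t
        reward += np.sum(c * (x - x0), axis=1)
        grad_sum += c
        # Bet a fraction of the current "wealth" (initial wealth 1); the paper
        # additionally normalises the gradients, which this sketch omits.
        x = x0 + grad_sum * ((1.0 + reward) / (t + 1))[:, None]
    return x
```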
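
Using the hypothetical sketch above, the quoted experiment setup can be mirrored on a toy problem. N = 20, T = 1000, and the N(0, 0.1²) initialisation come from the quoted setup; the 2-d standard-Gaussian target (score = -x) is our choice for illustration.

```python
# Toy usage with the paper's stated setup: N = 20 particles, T = 1000
# iterations, particles initialised i.i.d. N(0, 0.1^2).
rng = np.random.default_rng(0)
x0 = rng.normal(loc=0.0, scale=0.1, size=(20, 2))
particles = coin_svgd(lambda X: -X, x0, T=1000)   # target: standard Gaussian
print(particles.mean(axis=0), particles.std(axis=0))  # should approach 0 and 1
```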
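
The dataset-splits row describes a 70/10/20 random partition. A minimal sketch of such a split (our variable names; NumPy assumed, as the paper does not show its splitting code) might look like:

```python
# Hypothetical 70/10/20 random partition matching the quoted split.
import numpy as np

def split_70_10_20(n, seed=0):
    """Return index arrays for train (70%), validation (10%), test (20%)."""
    idx = np.random.default_rng(seed).permutation(n)
    n_train, n_val = int(0.7 * n), int(0.1 * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```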