Accelerating the diffusion-based ensemble sampling by non-reversible dynamics
Authors: Futoshi Futami, Issei Sato, Masashi Sugiyama
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments show that the discretization error can be controlled by tuning the interaction appropriately. Experiments on standard Bayesian models support the theoretical findings and show the superior performance of the method compared to SGLD and other ensemble methods. (A minimal SGLD baseline sketch appears after the table.) |
| Researcher Affiliation | Collaboration | *The author is now with NTT. 1: The University of Tokyo, Tokyo, Japan; 2: RIKEN, Tokyo, Japan. |
| Pseudocode | No | The paper describes algorithms using mathematical equations but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any specific links or explicit statements about the release of source code for the methodology described. |
| Open Datasets | Yes | Bayesian logistic regression experiment: Following Liu & Wang (2016), we test on BLR using the Covertype dataset (Dua & Graff, 2017); the result is shown in Fig. 2. Latent Dirichlet allocation experiment: We test on the LDA model using the ICML dataset (Ding et al., 2014), following the same setting as Patterson & Teh (2013). Bayesian neural net regression: We test on the BNN regression task using the Kin8nm dataset of UCI (Dua & Graff, 2017), following the same setting as Liu & Wang (2016). Bayesian neural net classification: We test on the BNN classification task using the MNIST dataset (Le Cun & Cortes, 2010). (A dataset-loading sketch appears after the table.) |
| Dataset Splits | No | The paper names the datasets used in each experiment but does not state the training/validation/test splits (percentages, sample counts, or references to predefined splits) needed to reproduce the data partitioning. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., programming languages, libraries, or frameworks). |
| Experiment Setup | Yes | Detailed experimental settings are given in Appendix M. The theoretical analysis confirms a trade-off between discretization error and convergence speed: tuning the interaction parameters α and J appropriately improves convergence speed while keeping the discretization error under control. (A sketch of such a tunable non-reversible update appears after the table.) |
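
The paper's main baseline is SGLD (stochastic gradient Langevin dynamics). For context, here is a minimal single-chain SGLD step, written as an Euler step of overdamped Langevin dynamics; `grad_log_post` is an illustrative stand-in for a stochastic gradient of the log posterior, not the paper's implementation:

```python
import numpy as np

def sgld_step(theta, grad_log_post, step_size, rng):
    """One SGLD update: theta <- theta + eps * grad log p(theta | data)
    + sqrt(2 * eps) * xi, with xi standard Gaussian noise."""
    noise = rng.normal(size=theta.shape) * np.sqrt(2.0 * step_size)
    return theta + step_size * grad_log_post(theta) + noise
```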
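The public datasets listed in the "Open Datasets" row can be fetched programmatically. A minimal sketch using scikit-learn's dataset utilities; the OpenML names `kin8nm` and `mnist_784` are assumptions about public mirrors, not sources cited by the paper, and the ICML text dataset of Ding et al. (2014) has no standard loader, so it is omitted:

```python
from sklearn.datasets import fetch_covtype, fetch_openml

# Covertype (Dua & Graff, 2017) -- used for Bayesian logistic regression.
covtype = fetch_covtype()
X_blr, y_blr = covtype.data, covtype.target

# Kin8nm (UCI) -- used for Bayesian neural net regression; fetched here
# from an OpenML mirror (assumed dataset name).
kin8nm = fetch_openml(name="kin8nm", as_frame=False)
X_reg, y_reg = kin8nm.data, kin8nm.target

# MNIST (Le Cun & Cortes, 2010) -- used for Bayesian neural net classification.
mnist = fetch_openml(name="mnist_784", as_frame=False)
X_cls, y_cls = mnist.data, mnist.target
```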
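The quantities tuned in the "Experiment Setup" row are the interaction strength α and a matrix J. A minimal sketch of a non-reversible Langevin step of the form θ ← θ + ε(I + αJ)∇log π(θ) + √(2ε)ξ with skew-symmetric J, which is a standard way to introduce non-reversibility; the paper's actual method couples an ensemble of particles, so the single-particle form, `alpha`, and `J` below are illustrative assumptions:

```python
import numpy as np

def nonreversible_step(theta, grad_log_post, step_size, alpha, J, rng):
    """One non-reversible Langevin update. The skew-symmetric term
    (J = -J.T) leaves the target invariant in continuous time while
    changing the mixing speed; alpha trades convergence speed against
    discretization error, as the paper's analysis discusses."""
    drift = (np.eye(theta.size) + alpha * J) @ grad_log_post(theta)
    noise = rng.normal(size=theta.shape) * np.sqrt(2.0 * step_size)
    return theta + step_size * drift + noise

# Illustrative usage on a standard Gaussian target (assumed, not from the paper):
rng = np.random.default_rng(0)
J = np.array([[0.0, 1.0], [-1.0, 0.0]])  # skew-symmetric interaction
grad_log_post = lambda th: -th           # gradient of log N(0, I)
theta = rng.normal(size=2)
for _ in range(1000):
    theta = nonreversible_step(theta, grad_log_post, 1e-2, alpha=1.0, J=J, rng=rng)
```

Setting `alpha=0.0` recovers the plain (reversible) Langevin step, which makes the speed/error trade-off easy to probe empirically by sweeping α.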