Stein Self-Repulsive Dynamics: Benefits From Past Samples
Authors: Mao Ye, Tongzheng Ren, Qiang Liu
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform extensive empirical studies of our new algorithm, showing that our method yields much higher sample efficiency and better uncertainty estimation than vanilla Langevin dynamics. |
| Researcher Affiliation | Academia | Tongzheng Ren * UT Austin my21@cs.utexas.edu Qiang Liu UT Austin lqiang@cs.utexas.edu |
| Pseudocode | No | The paper describes algorithms using mathematical equations and textual explanations, but it does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | Our code is available at https://github.com/lushleaf/Stein-Repulsive-Dynamics. |
| Open Datasets | Yes | We test the performance of SRLD on sampling the posterior of Bayesian Neural Network on the UCI datasets [Dua and Graff, 2017]. |
| Dataset Splits | Yes | All of the datasets are randomly partitioned into 90% for training and 10% for testing. |
| Hardware Specification | No | The paper does not provide specific hardware details such as CPU or GPU models used for the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library or framework versions). |
| Experiment Setup | Yes | We assume the output is normally distributed, with a two-layer neural network with 50 hidden units and tanh activation to predict the mean of outputs. All of the datasets are randomly partitioned into 90% for training and 10% for testing. The results are averaged over 20 random trials. We refer readers to Appendix C for hyper-parameter tuning and other experiment details. |
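The paper's baseline is vanilla (unadjusted) Langevin dynamics. For context, a minimal sketch of that update rule, not the authors' SRLD implementation; the Gaussian target, step size, and iteration count are illustrative choices, not taken from the paper:

```python
import numpy as np

def langevin_step(theta, grad_log_p, step_size, rng):
    """One unadjusted Langevin update: theta + (eps/2) * grad log p(theta) + sqrt(eps) * noise."""
    noise = rng.standard_normal(theta.shape)
    return theta + 0.5 * step_size * grad_log_p(theta) + np.sqrt(step_size) * noise

# Illustrative target: standard 2-D Gaussian, so grad log p(x) = -x.
rng = np.random.default_rng(0)
theta = np.zeros(2)
samples = []
for _ in range(5000):
    theta = langevin_step(theta, lambda x: -x, 0.1, rng)
    samples.append(theta.copy())
samples = np.array(samples)
```

SRLD augments this update with a Stein-repulsive term computed from past samples, which is what the reported sample-efficiency gains come from; the sketch above covers only the vanilla baseline.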
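The setup row specifies a two-layer network with 50 tanh hidden units predicting the output mean, with a random 90%/10% train/test split. A minimal sketch under those stated settings; the layer sizes and split ratio come from the paper, while the synthetic data, weight initialization scale, and all names are illustrative:

```python
import numpy as np

def init_params(d_in, d_hidden=50, rng=None):
    # Two-layer network: input -> 50 tanh units -> scalar mean, per the paper's setup.
    rng = rng or np.random.default_rng(0)
    return {
        "W1": rng.standard_normal((d_in, d_hidden)) * 0.1,
        "b1": np.zeros(d_hidden),
        "W2": rng.standard_normal((d_hidden, 1)) * 0.1,
        "b2": np.zeros(1),
    }

def predict_mean(params, X):
    h = np.tanh(X @ params["W1"] + params["b1"])
    return h @ params["W2"] + params["b2"]

# Random 90% / 10% train/test partition, as described in the paper.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))  # synthetic stand-in for a UCI dataset
perm = rng.permutation(len(X))
n_train = int(0.9 * len(X))
train_idx, test_idx = perm[:n_train], perm[n_train:]

params = init_params(d_in=8)
preds = predict_mean(params, X[test_idx])
```

In the paper this network defines the mean of a Gaussian likelihood over outputs, and SRLD is used to sample the posterior over `params` rather than to fit a point estimate.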