Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Stein Self-Repulsive Dynamics: Benefits From Past Samples

Authors: Mao Ye, Tongzheng Ren, Qiang Liu

NeurIPS 2020 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental We perform extensive empirical studies of our new algorithm, showing that our method yields much higher sample efficiency and better uncertainty estimation than vanilla Langevin dynamics.
Researcher Affiliation Academia Tongzheng Ren (UT Austin, EMAIL); Qiang Liu (UT Austin, EMAIL)
Pseudocode No The paper describes algorithms using mathematical equations and textual explanations, but it does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code Yes Our code is available at https://github.com/lushleaf/Stein-Repulsive-Dynamics.
Open Datasets Yes We test the performance of SRLD on sampling the posterior of Bayesian Neural Network on the UCI datasets [Dua and Graff, 2017].
Dataset Splits No All of the datasets are randomly partitioned into 90% for training and 10% for testing.
Hardware Specification No The paper does not provide specific hardware details such as CPU or GPU models used for the experiments.
Software Dependencies No The paper does not provide specific software dependencies with version numbers (e.g., library or framework versions).
Experiment Setup Yes We assume the output is normally distributed, with a two-layer neural network with 50 hidden units and tanh activation to predict the mean of outputs. All of the datasets are randomly partitioned into 90% for training and 10% for testing. The results are averaged over 20 random trials. We refer readers to Appendix C for hyper-parameter tuning and other experiment details.
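For context on the method these rows evaluate, below is a minimal, illustrative sketch of a self-repulsive Langevin update on a 1-D standard-normal target. It assumes an RBF kernel and a simple buffer of recent samples; the authors' released implementation differs in details such as how past samples are selected and how hyper-parameters are tuned, and all names here are hypothetical, not taken from that code.

```python
import numpy as np

def grad_logp(x):
    """Score of a 1-D standard-normal target: grad log p(x) = -x."""
    return -x

def svgd_drift(x, past, h=1.0):
    """SVGD-style drift at x computed against a buffer of past samples:
    mean_i [ k(z_i, x) * grad_logp(z_i) + d/dz_i k(z_i, x) ], RBF kernel."""
    diff = past - x                       # z_i - x
    k = np.exp(-diff**2 / (2.0 * h))      # kernel values k(z_i, x)
    dk = -(diff / h) * k                  # derivative of k w.r.t. z_i
    return np.mean(k * grad_logp(past) + dk)

def srld(n_steps=20000, eps=0.05, alpha=0.5, buffer_size=100, seed=0):
    """Self-repulsive Langevin sketch: vanilla Langevin dynamics plus an
    SVGD drift against past samples, which pushes new samples away from
    regions already visited."""
    rng = np.random.default_rng(seed)
    x = 0.0
    past = [x]
    samples = []
    for _ in range(n_steps):
        drift = grad_logp(x) + alpha * svgd_drift(x, np.array(past[-buffer_size:]))
        x = x + eps * drift + np.sqrt(2.0 * eps) * rng.standard_normal()
        past.append(x)
        samples.append(x)
    return np.array(samples)
```

For small alpha the stationary distribution stays close to the target while the repulsive term discourages the chain from revisiting past locations, which is the intuition behind the sample-efficiency claim quoted above.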