On Testing of Samplers
Authors: Kuldeep S Meel, Yash Pralhad Pote, Sourav Chakraborty
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present a prototype implementation of Barbarik2 and use it to test three state-of-the-art samplers. To demonstrate the practical efficiency of Barbarik2, we developed a prototype implementation in Python and performed an experimental evaluation with several samplers. |
| Researcher Affiliation | Academia | ¹School of Computing, National University of Singapore; ²Indian Statistical Institute, Kolkata |
| Pseudocode | Yes | Algorithm 1: Barbarik2(G, A, ε, η, δ, ϕ, S, wt); Algorithm 2: Barbarik2Kernel(ϕ, σ1, σ2); Algorithm 3: Bias(σ̂, Γ, S) |
| Open Source Code | Yes | The accompanying tool, available open source, can be found at https://github.com/meelgroup/barbarik. |
| Open Datasets | Yes | We conducted our experiments on 72 publicly available benchmarks, which have been employed in the evaluation of samplers proposed in the past [13, 21]. |
| Dataset Splits | No | The paper uses publicly available benchmarks to evaluate samplers but does not describe any specific training, validation, or test dataset splits for these benchmarks. |
| Hardware Specification | Yes | All experiments were conducted on a high performance computing cluster with 600 E5-2690 v3 @2.60GHz CPU cores. |
| Software Dependencies | No | The paper mentions 'Python' for implementation and uses 'WAPS' as an ideal sampler, but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | No | The paper specifies test parameters such as the tolerance, intolerance, and confidence values, and mentions instantiating Barbarik2Kernel with m=12 and k=2m-1. However, it does not report training hyperparameters or system-level training settings of the kind typically found in machine learning experimental setups. |
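
The Pseudocode and Experiment Setup rows above name the Barbarik2 subroutines (Barbarik2Kernel, Bias) and the test parameters (tolerance, intolerance, confidence). As a rough illustration of the kind of empirical bias estimate suggested by the Bias(σ̂, Γ, S) signature, here is a minimal sketch; the function name, data layout, and variable representation are assumptions made for illustration, not the paper's actual Barbarik2 implementation.

```python
from collections import Counter
from typing import Dict, Iterable, List


def estimate_bias(target_assignment: Dict[str, bool],
                  samples: Iterable[Dict[str, bool]],
                  support: List[str]) -> float:
    """Hypothetical sketch (not the paper's Bias subroutine): estimate the
    fraction of sampler draws that agree with a target assignment on the
    variables in `support`."""
    samples = list(samples)
    if not samples:
        return 0.0

    # Project an assignment onto the support variables, in a canonical order.
    def project(assignment: Dict[str, bool]):
        return tuple((v, assignment[v]) for v in sorted(support))

    target = project(target_assignment)
    counts = Counter(project(s) for s in samples)
    # Empirical frequency of the target assignment among the drawn samples.
    return counts[target] / len(samples)
```

Under these assumptions, a tester would compare such an empirical fraction against thresholds derived from the tolerance and intolerance parameters to decide accept/reject; the actual thresholds, sample-size bounds, and the conditioning performed by Barbarik2Kernel are specified in the paper's Algorithms 1–3.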