Stochastic Gradient Monomial Gamma Sampler
Authors: Yizhe Zhang, Changyou Chen, Zhe Gan, Ricardo Henao, Lawrence Carin
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | It is shown that the proposed approach is better at exploring complex multimodal posterior distributions, as demonstrated on multiple applications and in comparison with other stochastic gradient MCMC methods. |
| Researcher Affiliation | Academia | 1Duke University, Durham, NC, 27708. Correspondence to: Yizhe Zhang <yizhe.zhang@duke.edu>. |
| Pseudocode | Yes | The complete update scheme, with Euler integrator, for SGMGT is presented in the SM. |
| Open Source Code | No | The paper does not provide an explicit statement about, or a link to, open-source code for the described methodology. |
| Open Datasets | Yes | We evaluated the mixing efficiency and accuracy of SGMGT and SGMGT-D using Bayesian logistic regression (BLR) on 6 real-world datasets from the UCI repository (Bache & Lichman, 2013): German credit (G), Australian credit (A), Pima Indian (P), Heart (H), Ripley (R) and Caravan (C). |
| Dataset Splits | Yes | We use 80% of the documents for training and the remaining 20% for testing. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper mentions models and algorithms but does not specify version numbers for any software dependencies or libraries used for implementation (e.g., PyTorch, TensorFlow, scikit-learn versions). |
| Experiment Setup | Yes | We set the minibatch size to 16. Other hyperparameters are provided in the SM. For each experiment, we draw 5000 iterations with 1000 burn-in samples. |
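
The "Pseudocode" row above points to the supplementary material for the complete Euler-integrator update scheme of SGMGT. For orientation only, here is a minimal Python sketch of one such step, assuming a generic SGNHT-style thermostat combined with a monomial kinetic energy K(p) ∝ |p|^(1/a); the function names, the parameterization of `a`, and the default step size and noise coefficient are illustrative assumptions, not the paper's actual update equations.

```python
import numpy as np

def grad_kinetic(p, a=1.0):
    """Gradient of a monomial kinetic energy K(p) proportional to |p|^(1/a).

    a = 0.5 recovers the usual quadratic (Gaussian) kinetic energy up to
    constants; other values of a give the "monomial Gamma" generalization.
    """
    return np.sign(p) * np.abs(p) ** (1.0 / a - 1.0)

def sgmgt_step(theta, p, xi, stoch_grad_U, h=1e-3, a=1.0, D=1.0):
    """One Euler step of an SGNHT-style thermostated sampler with a
    monomial kinetic energy (hypothetical sketch; the paper's exact
    update scheme is given in its supplementary material).

    theta        -- model parameters
    p            -- momentum
    xi           -- scalar thermostat variable
    stoch_grad_U -- minibatch gradient of the negative log posterior
    h            -- step size; D -- injected-noise coefficient
    """
    gK = grad_kinetic(p, a)
    # parameters move along the kinetic-energy gradient
    theta = theta + h * gK
    # momentum: potential force, thermostat friction, injected noise
    p = (p - h * stoch_grad_U(theta) - h * xi * gK
         + np.sqrt(2.0 * D * h) * np.random.randn(*np.shape(p)))
    # thermostat adapts so the average of p * dK/dp tracks its target of 1
    xi = xi + h * (np.mean(p * grad_kinetic(p, a)) - 1.0)
    return theta, p, xi
```

Setting `a = 0.5` reduces `grad_kinetic(p)` to `p`, i.e. the standard Gaussian-momentum update, which makes the monomial generalization easy to sanity-check.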
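The "Open Datasets", "Dataset Splits", and "Experiment Setup" rows together pin down an 80/20 train/test split, a minibatch size of 16, 5000 iterations, and 1000 burn-in samples. The sketch below wires those quoted values into a Bayesian logistic regression run using `sgmgt_step` from the previous block; the synthetic stand-in data (in place of the UCI datasets), the prior variance, and the random seed are assumptions made only so the snippet is self-contained.

```python
import numpy as np

def blr_stoch_grad(theta, X, y, n_total, prior_var=1.0):
    """Minibatch gradient of the negative log posterior for Bayesian
    logistic regression, rescaled from the minibatch to the full dataset."""
    probs = 1.0 / (1.0 + np.exp(-(X @ theta)))
    grad_lik = X.T @ (probs - y) * (n_total / len(y))
    return grad_lik + theta / prior_var  # Gaussian-prior term

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))              # stand-in for a UCI dataset
y = (rng.random(1000) < 0.5).astype(float)

# 80% of the data for training, the remaining 20% for testing
perm = rng.permutation(len(X))
n_train = int(0.8 * len(X))
train, test = perm[:n_train], perm[n_train:]

# 5000 iterations with 1000 burn-in samples, minibatch size 16
theta = np.zeros(X.shape[1])
p = rng.normal(size=theta.shape)
xi = 1.0
samples = []
for it in range(5000):
    idx = rng.choice(train, size=16, replace=False)
    grad = lambda th: blr_stoch_grad(th, X[idx], y[idx], n_total=n_train)
    theta, p, xi = sgmgt_step(theta, p, xi, grad)
    if it >= 1000:                           # discard burn-in draws
        samples.append(theta.copy())
```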