Stochastic Variance-Reduced Hamilton Monte Carlo Methods

Authors: Difan Zou, Pan Xu, Quanquan Gu

ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility assessment: each variable below is listed with its assessed result, followed by the LLM response quoted as evidence.
Research Type: Experimental. LLM response: "Experiments on both synthetic and real data demonstrate the superior performance of our algorithm."
Researcher Affiliation: Academia. LLM response: "Department of Computer Science, University of California, Los Angeles, CA 90095, USA. Correspondence to: Quanquan Gu <qgu@cs.ucla.edu>."
Pseudocode: Yes. LLM response: "Algorithm 1 Stochastic Variance-Reduced Hamiltonian Monte Carlo (SVR-HMC)"
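
The paper's Algorithm 1 is only quoted by name above. As a rough aid, here is a minimal numpy sketch of an SVR-HMC-style sampler, assuming a semi-implicit Euler discretization of underdamped Langevin dynamics driven by an SVRG-style semi-stochastic gradient; the function names, scaling convention, and discretization details are our assumptions, not the paper's published pseudocode.

    import numpy as np

    def svr_hmc(grad_fi, n, d, eta, gamma, u, num_iters, m, rng=None):
        # Sketch of an SVR-HMC-style sampler for p(x) ~ exp(-F(x)), where
        # F(x) = sum_i f_i(x) and grad_fi(x, i) returns the gradient of f_i.
        rng = np.random.default_rng() if rng is None else rng
        x, v = np.zeros(d), np.zeros(d)   # x_0 = 0, v_0 = 0, as in the paper's setup
        samples = []
        for k in range(num_iters):
            if k % m == 0:                # refresh snapshot + full gradient every m steps
                x_snap = x.copy()
                g_snap = sum(grad_fi(x_snap, i) for i in range(n))
            i = rng.integers(n)
            # SVRG-style estimate: unbiased for the full gradient sum_i grad f_i(x)
            g = n * (grad_fi(x, i) - grad_fi(x_snap, i)) + g_snap
            # one semi-implicit Euler step of underdamped Langevin dynamics
            v = v - eta * (gamma * v + u * g) \
                + np.sqrt(2.0 * gamma * u * eta) * rng.standard_normal(d)
            x = x + eta * v
            samples.append(x.copy())
        return np.array(samples)
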
Open Source Code: No. LLM response: "The paper does not provide a link to open-source code or state that the code for the described methodology is publicly available."
Open Datasets: Yes. LLM response: "We use four binary classification datasets from Libsvm (Chang & Lin, 2011) and UCI machine learning repository (Lichman, 2013), which are summarized in Table 3."
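
For reference, LIBSVM-format copies of such datasets can be read with scikit-learn; the filename below is a hypothetical local path, not one given by the paper.

    from sklearn.datasets import load_svmlight_file

    # Hypothetical local copy of one of the binary classification datasets.
    X_sparse, y = load_svmlight_file("mushrooms.txt")
    X = X_sparse.toarray()   # dense features, convenient for the normalization step
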
Dataset Splits: No. LLM response: "Note that pima and mushroom do not have test data in their original version, and we split them into 50% for training and 50% for test." The paper does not specify a validation split for any of the datasets used.
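
Since pima and mushroom ship without a test set, the quoted 50%/50% split could be reproduced along these lines with scikit-learn; the random seed is our arbitrary choice, since the paper does not state one.

    from sklearn.model_selection import train_test_split

    # 50% train / 50% test, matching the quoted protocol; the seed is arbitrary.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, random_state=0)
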
Hardware Specification: No. LLM response: "The paper does not provide specific details about the hardware used to run the experiments (e.g., CPU/GPU models, memory specifications)."
Software Dependencies: No. LLM response: "The paper mentions using Libsvm but does not specify software dependencies with version numbers for reproducibility."
Experiment Setup: Yes. LLM response: "Set u = 1/L, γ = 2, x_0 = 0, v_0 = 0 and η = Õ(1/(κ^(1/3) n^(2/3)))... let m = n and η = O(ε/(κ d^(1/2)) ∧ ε^(2/3)/(κ^(1/3) d^(1/3) n^(2/3)))... we report the sample path average and discard the first 50 iterations as burn-in. It is worth noting that we observe similar convergence comparison of different algorithms for a larger burn-in period (= 10^4). We run each algorithm 20 times and report the averaged results for comparison. In our experiment, we set σ_a^2 = 1 and λ = 1, and conduct the normalization of the original data."
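
To make the quoted protocol concrete, here is a toy harness (our construction, not the paper's code) that reuses the svr_hmc sketch above on a standard Gaussian target, discards the first 50 iterates as burn-in, and averages 20 independent runs; the problem sizes and step size are stand-ins, not the paper's tuned values.

    import numpy as np

    # Toy target N(0, I_d): F(x) = ||x||^2 / 2 split into n equal components,
    # so f_i(x) = ||x||^2 / (2n), L = 1, hence u = 1/L = 1 and γ = 2 as quoted.
    n, d, L = 100, 2, 1.0
    grad_fi = lambda x, i: x / n              # gradient of each component f_i
    eta = 1.0 / n ** (2.0 / 3.0)              # crude stand-in for the tuned step size

    runs = []
    for seed in range(20):                    # 20 independent runs, as quoted
        path = svr_hmc(grad_fi, n, d, eta, gamma=2.0, u=1.0 / L,
                       num_iters=2000, m=n, rng=np.random.default_rng(seed))
        runs.append(path[50:].mean(axis=0))   # sample path average after burn-in
    print(np.mean(runs, axis=0))              # should land near the true zero mean
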