Robust Monte Carlo Sampling using Riemannian Nosé-Poincaré Hamiltonian Dynamics

Authors: Anirban Roychowdhury, Brian Kulis, Srinivasan Parthasarathy

ICML 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show strong performance of our algorithms on synthetic datasets and high-dimensional Poisson factor analysis-based topic modeling scenarios." (Abstract)
Researcher Affiliation | Academia | "Anirban Roychowdhury (ROYCHOWDHURY.7@OSU.EDU), Ohio State University, Columbus, OH 43210; Brian Kulis (BKULIS@BU.EDU), Boston University, Boston, MA 02215; Srinivasan Parthasarathy (SRINI@CSE.OHIO-STATE.EDU), Ohio State University, Columbus, OH 43210"
Pseudocode | Yes | "Algorithm 1 Riemann Nosé-Poincaré HMC" (Section 3.1.2; see the orientation sketch after the table)
Open Source Code | No | The paper does not provide any explicit statement or link regarding the availability of its source code.
Open Datasets | Yes | "We use two public datasets for this experiment, the 20-Newsgroups and Reuters Corpus Volume 1 corpora from (Srivastava et al., 2013)." (Section 4.3)
Dataset Splits | Yes | "We use the same training/validation/test split as (Gan et al., 2015), where the 20 Newsgroups dataset is split chronologically into 11,314 training and 7,531 test documents, and the Reuters dataset into 794,414 training and 10,000 test documents." (Section 4.3)
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers used for the experiments.
Experiment Setup | Yes | "Learning rates are fixed to 1e-3 and batchsizes to 100 for both algorithms." (Section 4.1) "We used K = 200 latent topics for all algorithms. For SGR-NPHMC we set the learning rates of all three NPHMC chains to 1e-4, and for SG-NHT we use a stable learning rate of 1e-6. Batchsize was set to 100 for both algorithms." (Section 4.3; see the configuration sketch after the table)
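The "Pseudocode" row refers to Algorithm 1, Riemann Nosé-Poincaré HMC, stated in Section 3.1.2 of the paper; its thermostatted, metric-aware generalized-leapfrog updates are not reproduced here. For orientation only, below is a minimal sketch of plain Euclidean HMC with an identity mass matrix and a Metropolis correction. It is not the paper's Algorithm 1, and the callables `U` (negative log-density) and `grad_U` (its gradient) are hypothetical placeholders to be supplied by the user.

```python
import numpy as np

def leapfrog(theta, p, grad_U, step_size, n_steps):
    """One leapfrog trajectory for standard (Euclidean-metric) HMC."""
    p = p - 0.5 * step_size * grad_U(theta)      # initial half-step for momentum
    for _ in range(n_steps - 1):
        theta = theta + step_size * p            # full step for position
        p = p - step_size * grad_U(theta)        # full step for momentum
    theta = theta + step_size * p
    p = p - 0.5 * step_size * grad_U(theta)      # final half-step for momentum
    return theta, p

def hmc_sample(theta0, U, grad_U, step_size=1e-2, n_steps=20, n_samples=1000, rng=None):
    """Plain HMC with identity mass matrix; theta0 is a 1-D numpy array."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p0 = rng.standard_normal(theta.shape)
        theta_new, p_new = leapfrog(theta, p0, grad_U, step_size, n_steps)
        # Metropolis accept/reject on the joint Hamiltonian H = U + kinetic energy
        h_old = U(theta) + 0.5 * p0 @ p0
        h_new = U(theta_new) + 0.5 * p_new @ p_new
        if np.log(rng.uniform()) < h_old - h_new:
            theta = theta_new
        samples.append(theta.copy())
    return np.array(samples)
```

Once `U` and `grad_U` are defined for a target of interest, `hmc_sample(np.zeros(2), U, grad_U)` draws samples from it.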
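The "Experiment Setup" row quotes the hyperparameters reported in Sections 4.1 and 4.3. A minimal sketch collecting them into configuration dictionaries is shown below; the key names (e.g. `sgr_nphmc_learning_rate`) are illustrative assumptions and do not come from any released code.

```python
# Hyperparameters quoted from the paper, gathered for convenience.
synthetic_config = {
    "learning_rate": 1e-3,   # Section 4.1: fixed for both algorithms
    "batch_size": 100,
}

topic_model_config = {
    "num_topics": 200,                # K = 200 latent topics for all algorithms
    "batch_size": 100,
    "sgr_nphmc_learning_rate": 1e-4,  # all three NPHMC chains
    "sg_nht_learning_rate": 1e-6,     # reported stable rate for SG-NHT
}
```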