Stochastic Fractional Hamiltonian Monte Carlo

Authors: Nanyang Ye, Zhanxing Zhu

Venue: IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To evaluate the proposed method on both sampling and optimization, we conduct experiments on synthetic examples and the MNIST classification task. For sampling, we compare our method with FLD, HMC and LD. For training deep neural networks, we compare our method with popular optimization methods: SGD, Adam and RMSprop. The same parameter initialization is used for all methods.
Researcher Affiliation | Academia | Nanyang Ye (1), Zhanxing Zhu (2,3); (1) University of Cambridge, Cambridge, United Kingdom; (2) Center for Data Science, Peking University, Beijing, China; (3) Beijing Institute of Big Data Research (BIBDR)
Pseudocode | Yes | Algorithm 1: (Stochastic Gradient) Fractional Hamiltonian Monte Carlo (an illustrative, hedged sketch of a sampler step appears after the table).
Open Source Code | No | Our implementation is adapted from https://github.com/hwalsuklee/tensorflow-mnist-VAE. The paper does not explicitly provide a link to the authors' own implementation of the proposed FHMC/SGFHMC methodology.
Open Datasets | Yes | We used the training set of the MNIST dataset, consisting of 60,000 training images, for this task.
Dataset Splits | No | During training, the dataset is split into training, validation and test sets. (No specific percentages or counts are given for these splits, so they are not reproducible.)
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions TensorFlow but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | The best parameter settings for each method are: SGFHMC (learning rate 0.03, momentum 0.9, α = 1.6), SGD (learning rate 0.003, momentum 0.2), Adam (learning rate 0.0001), RMSprop (learning rate 0.0001). A hedged configuration sketch using these reported values appears after the table.
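
The paper's Algorithm 1 is not reproduced in this report, so the following is a minimal sketch only: a momentum-based stochastic-gradient update with symmetric alpha-stable noise, run on a toy standard-Gaussian target and using the hyperparameters reported above (learning rate 0.03, momentum 0.9, α = 1.6). The noise scale `lr ** (1/alpha)`, the toy target, and the function name `sgfhmc_like_step` are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only (NOT the paper's Algorithm 1): momentum update with
# symmetric alpha-stable noise on a toy standard-Gaussian target.
import numpy as np
from scipy.stats import levy_stable


def grad_U(theta):
    # Gradient of the negative log density of a standard Gaussian target,
    # standing in for a minibatch gradient of the negative log posterior.
    return theta


def sgfhmc_like_step(theta, p, lr=0.03, beta=0.9, alpha=1.6):
    """One hypothetical sampler step: momentum plus symmetric alpha-stable noise."""
    # beta=0.0 in levy_stable means a symmetric stable distribution; the scale
    # heuristic lr ** (1/alpha) is an assumption, not taken from the paper.
    noise = levy_stable.rvs(alpha, 0.0, scale=lr ** (1.0 / alpha), size=theta.shape)
    p = beta * p - lr * grad_U(theta) + noise
    return theta + p, p


theta, p = np.zeros(2), np.zeros(2)
samples = []
for _ in range(5000):
    theta, p = sgfhmc_like_step(theta, p)
    samples.append(theta.copy())
# Heavy-tailed noise makes moment estimates noisy; the median is more stable.
print("sample median:", np.median(samples, axis=0))
```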
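
For the baselines, the reported hyperparameters could be instantiated as below. The paper only states that TensorFlow was used, so the TensorFlow 2 Keras optimizer API shown here is an assumption about tooling, not the authors' original code.

```python
# Hedged sketch: baseline optimizers with the hyperparameters reported in the
# table, written against the TensorFlow 2 Keras API (assumed, not confirmed).
import tensorflow as tf

optimizers = {
    "SGD": tf.keras.optimizers.SGD(learning_rate=0.003, momentum=0.2),
    "Adam": tf.keras.optimizers.Adam(learning_rate=1e-4),
    "RMSprop": tf.keras.optimizers.RMSprop(learning_rate=1e-4),
}
```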