Langevin Quasi-Monte Carlo

Author: Sifan Liu

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The theoretical analysis is supported by compelling numerical experiments, which demonstrate the effectiveness of our approach."
Researcher Affiliation | Academia | "Sifan Liu, Department of Statistics, Stanford University, Stanford, CA 94305, sfliu@stanford.edu"
Pseudocode | Yes | "Algorithm 1 Langevin quasi-Monte Carlo (LQMC)"
Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the methodology described.
Open Datasets | Yes | "To investigate the performance of LQMC in a posterior prediction setting, we conducted experiments similar to those presented in Dubey et al. (2016) using three UCI datasets. Each dataset was split into a training set (70%), a validation set (10%), and a test set (20%)."
Dataset Splits | Yes | "Each dataset was split into a training set (70%), a validation set (10%), and a test set (20%)."
Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., programming languages, libraries, frameworks, or specific tools).
Experiment Setup | Yes | "The step size h is fixed to 0.001. ... at each iteration, we estimate the gradient using a random subset of 10 observations. ... We will compare the performance of the LQMC algorithm using three different step sizes: a constant step size of 10^-4, a constant step size of 10^-2, and decreasing step sizes with h_k = c0(c1 + k)^(-1/3). ... Each iteration computes the stochastic gradient using 32 data points sampled at random."
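The Pseudocode, Dataset Splits, and Experiment Setup rows above describe concrete procedures; the sketches below illustrate them. First, the Pseudocode row cites Algorithm 1 (LQMC) without reproducing it. Below is a minimal sketch of a Langevin update driven by quasi-random Gaussian innovations; the scrambled Sobol' generator and the inverse-CDF transform are illustrative stand-ins for the paper's quasi-random driving sequence, not its exact construction.

```python
import numpy as np
from scipy.stats import norm, qmc

def lqmc_chain(grad_log_pi, x0, h=1e-3, n_steps=1024, seed=0):
    """Unadjusted Langevin iteration driven by quasi-random Gaussians.

    A sketch only: the paper's Algorithm 1 specifies its own quasi-random
    sequence; the scrambled Sobol' points used here are an assumption.
    """
    x0 = np.atleast_1d(np.asarray(x0, dtype=float))
    d = x0.shape[0]
    # Low-discrepancy uniforms, mapped to Gaussians by the inverse CDF.
    u = qmc.Sobol(d=d, scramble=True, seed=seed).random(n_steps)
    z = norm.ppf(np.clip(u, 1e-12, 1 - 1e-12))
    xs = np.empty((n_steps + 1, d))
    xs[0] = x0
    for k in range(n_steps):
        # Langevin drift plus sqrt(2h)-scaled quasi-random innovation.
        xs[k + 1] = xs[k] + h * grad_log_pi(xs[k]) + np.sqrt(2 * h) * z[k]
    return xs

# Example: a standard Gaussian target, whose grad-log-density is -x.
chain = lqmc_chain(lambda x: -x, x0=np.zeros(2), h=1e-3, n_steps=2048)
print(chain.mean(axis=0))  # stays near the target mean (0, 0)
```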
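The Dataset Splits row reports a 70%/10%/20% train/validation/test split. A minimal sketch of such a split follows; the paper states the proportions but not the splitting mechanism, so the uniform random permutation is an assumption.

```python
import numpy as np

def split_70_10_20(X, y, seed=0):
    """Random 70% / 10% / 20% train/validation/test split (sketch).

    The permutation-based mechanism is an assumption; only the
    proportions come from the paper.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(X))
    n_train, n_val = int(0.7 * len(X)), int(0.1 * len(X))
    tr, va, te = np.split(perm, [n_train, n_train + n_val])
    return (X[tr], y[tr]), (X[va], y[va]), (X[te], y[te])
```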
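Finally, the Experiment Setup row mentions minibatch gradient estimates and three step-size choices, including the decreasing schedule h_k = c0(c1 + k)^(-1/3). The sketch below shows one stochastic-gradient Langevin step under these schedules; the constants c0 and c1, and the n/batch_size rescaling of the minibatch likelihood gradient, are illustrative assumptions not reported in the excerpt.

```python
import numpy as np

def step_size(k, schedule="decreasing", c0=1e-2, c1=1.0):
    """Step-size choices from the experiment setup.

    The paper compares constant 1e-4, constant 1e-2, and the decreasing
    schedule h_k = c0 * (c1 + k)**(-1/3); c0 and c1 are placeholders,
    since the excerpt does not report them.
    """
    if schedule == "constant_small":
        return 1e-4
    if schedule == "constant_large":
        return 1e-2
    return c0 * (c1 + k) ** (-1.0 / 3.0)

def stochastic_langevin_step(x, data, grad_log_prior, grad_log_lik, k, rng,
                             batch_size=32, **schedule_kwargs):
    """One Langevin step with a minibatch gradient estimate (sketch)."""
    n = len(data)
    idx = rng.choice(n, size=batch_size, replace=False)
    # Rescale the minibatch likelihood gradient so the estimate is unbiased.
    g = grad_log_prior(x)
    g = g + (n / batch_size) * sum(grad_log_lik(x, data[i]) for i in idx)
    h = step_size(k, **schedule_kwargs)
    return x + h * g + np.sqrt(2.0 * h) * rng.standard_normal(np.shape(x))
```

The sqrt(2h)-scaled innovation matches the Langevin parameterization used in the LQMC sketch above; other SGLD write-ups fold the factor of 2 into the step size instead.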