Path-Guided Particle-based Sampling

Authors: Mingzhou Fan, Ruida Zhou, Chao Tian, Xiaoning Qian

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. We demonstrate the effectiveness of the proposed PGPS method compared to the LD, SVGD (Liu & Wang, 2016), and PFG (Dong et al., 2022) baselines. The number of iterations is the same for each method, with the Langevin Adjustment steps in PGPS counted toward that budget. The code to reproduce the experimental results can be found in our GitHub repository: https://github.com/MingzhouFan97/PGPS. Relevant sections: 5.1. Gaussian Mixture Target Distribution; 5.2. Bayesian Neural Network Inference.
Researcher Affiliation: Academia. (1) Department of Electrical & Computer Engineering, Texas A&M University, College Station, TX, USA; (2) Department of Electrical and Computer Engineering, University of California, Los Angeles, CA, USA; (3) Department of Computer Science and Engineering, Texas A&M University, College Station, TX, USA; (4) Computational Science Initiative, Brookhaven National Laboratory, Upton, NY, USA.
Pseudocode: Yes. Algorithm 1 (Adaptive Time Step), Algorithm 2 (Langevin Adjustment), Algorithm 3 (PGPS).
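The paper's algorithms are not reproduced in this report. As a point of reference for the Langevin Adjustment step, a plain unadjusted Langevin update, which is the standard form such a step takes, can be sketched as follows; the function name `langevin_step` and the score function `grad_log_p` are illustrative assumptions, not the authors' code:

```python
import math
import random

def langevin_step(x, grad_log_p, eps=1e-2, rng=random):
    """One unadjusted Langevin update:
    x <- x + eps * grad log p(x) + sqrt(2 * eps) * standard Gaussian noise."""
    return [xi + eps * gi + math.sqrt(2.0 * eps) * rng.gauss(0.0, 1.0)
            for xi, gi in zip(x, grad_log_p(x))]

# Toy usage: standard Gaussian target, so grad log p(x) = -x.
grad = lambda x: [-xi for xi in x]
rng = random.Random(0)
x = [5.0, -5.0]
for _ in range(2000):
    x = langevin_step(x, grad, eps=1e-2, rng=rng)
# After many steps the particle wanders near the high-probability region.
```

The step size `eps=1e-2` matches the 10^-2 adjustment step size reported in the experiment setup below.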
Open Source Code: Yes. The code to reproduce the experimental results can be found in our GitHub repository: https://github.com/MingzhouFan97/PGPS.
Open Datasets: Yes. We conduct BNN inference for UCI datasets (Dua & Graff, 2017) and learn BNNs on the MNIST dataset (Deng, 2012).
Dataset Splits: No. The paper mentions using the UCI and MNIST datasets but does not provide specific details on training, validation, or test splits (e.g., percentages, sample counts, or explicit citations to the splits used) in the main text or appendices.
Hardware Specification: Yes. The experiments are performed on an Nvidia Tesla T4 GPU and an Intel Xeon 8352Y CPU.
Software Dependencies: No. The paper mentions the use of "machine learning packages" and implies Python/PyTorch through context, but does not specify software names with version numbers (e.g., "Python 3.x", "PyTorch 1.x") as needed for reproducibility.
Experiment Setup: Yes. To estimate the vector field ϕt for PGPS in both experiments, we use a two-layer perceptron with 64 hidden neurons and a Sigmoid activation function. The particle step sizes ψ are set to {0.5, 0.1, 0.05, 0.01}; the step sizes for LD, PFG, SVGD, and the PGPS adjustment are all set to 10^-2. The path hyperparameter α is selected from {0, 0.2, 0.4, 0.6, 0.8, 1} and β from {0.2, 0.4, 0.6, 0.8, 1}.
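The described vector-field network (a two-layer perceptron with 64 hidden units and Sigmoid activation) can be sketched in stdlib-only Python as below. The class name `VectorFieldMLP`, the uniform random initialization, and the omission of training are illustrative assumptions, not details from the paper:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class VectorFieldMLP:
    """Two-layer perceptron mapping a particle x in R^d to a vector in R^d,
    with 64 Sigmoid hidden units as described in the experiment setup."""

    def __init__(self, dim, hidden=64, seed=0):
        rng = random.Random(seed)
        s1 = 1.0 / math.sqrt(dim)
        self.W1 = [[rng.uniform(-s1, s1) for _ in range(dim)] for _ in range(hidden)]
        self.b1 = [0.0] * hidden
        s2 = 1.0 / math.sqrt(hidden)
        self.W2 = [[rng.uniform(-s2, s2) for _ in range(hidden)] for _ in range(dim)]
        self.b2 = [0.0] * dim

    def __call__(self, x):
        # Hidden layer: Sigmoid(W1 @ x + b1)
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(self.W1, self.b1)]
        # Output layer: W2 @ h + b2 (same dimension as the input particle)
        return [sum(w * hi for w, hi in zip(row, h)) + b
                for row, b in zip(self.W2, self.b2)]

phi = VectorFieldMLP(dim=2)
out = phi([0.3, -0.7])  # a 2-dimensional vector-field estimate at this particle
```

In practice the authors would train such a network rather than use random weights; the sketch only fixes the architecture stated in the setup.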