Amortized Population Gibbs Samplers with Neural Sufficient Statistics

Authors: Hao Wu, Heiko Zimmermann, Eli Sennesh, Tuan Anh Le, Jan-Willem van de Meent

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate APG samplers on three different tasks.
Researcher Affiliation | Academia | (1) Khoury College of Computer Sciences, Northeastern University, Boston, MA, USA; (2) Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA.
Pseudocode | Yes | Algorithm 1 (Sequential Monte Carlo sampler) and Algorithm 2 (Amortized Population Gibbs Sampler); a minimal sketch of the sweep structure follows this table.
Open Source Code | No | The paper does not provide an explicit statement or link to publicly available source code for the methodology described.
Open Datasets | Yes | In the bouncing MNIST model, our data is a corpus of video frames that contain multiple moving MNIST digits.
Dataset Splits | No | The paper specifies training and testing corpora, but it does not explicitly mention a separate validation split.
Hardware Specification | No | The paper mentions "GPU memory" but does not give specifics on the hardware used, such as GPU models, CPU types, or memory sizes.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., particular Python libraries or frameworks).
Experiment Setup | Yes | We train with K = 5 sweeps, L = 10 particles, 20 instances per batch, learning rate 2.5 × 10⁻⁴, and 2 × 10⁵ gradient steps. ... We train our model with K = 8 sweeps, L = 10 particles, 20 instances per batch, learning rate 10⁻⁴, and 3 × 10⁵ gradient steps. ... with K = 5 sweeps, L = 10 particles, 5 instances per batch, learning rate 10⁻⁴, and 1.2 × 10⁷ gradient steps. (These settings are collected in the configuration sketch after this table.)
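
To make the Algorithm 2 reference concrete, here is a minimal sketch of the APG control flow of sweeps, resampling, and block-wise reweighting. The `init_proposal` and `block_kernels` interfaces are hypothetical stand-ins for the paper's learned neural proposals; this is not the authors' implementation.

```python
import torch

def apg_sampler(x, init_proposal, block_kernels, num_sweeps, num_particles):
    """Sketch of an Amortized Population Gibbs sweep (Algorithm 2), under
    assumed interfaces:

    init_proposal(x, L) -> (z, log_w): L initial particles and importance
        log-weights log p(x, z) - log q(z | x).
    block_kernels: list of callables; kernel(x, z) -> (z_new, log_v) proposes
        a Gibbs-style block update and returns the incremental log-weight.
    """
    z, log_w = init_proposal(x, num_particles)
    for _ in range(num_sweeps):
        for kernel in block_kernels:
            # Resample particles in proportion to their weights, then reset
            # the weights (standard SMC resampling before each block update).
            probs = torch.softmax(log_w, dim=0)
            idx = torch.multinomial(probs, num_particles, replacement=True)
            z = z[idx]
            # Propose a block update; its incremental log-weight becomes the
            # new particle weight.
            z, log_w = kernel(x, z)
    return z, log_w
```

In the paper the block kernels are neural proposals trained with the APG objective; treating them as opaque callables here keeps the population-level structure (K sweeps over L particles with resampling between blocks) in view.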
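The Experiment Setup row quotes three training configurations; collecting them in one place makes the differences easier to scan. The task keys below are placeholders, since the quoted excerpt does not name which configuration belongs to which of the three tasks; only the numeric values are taken from the paper.

```python
# Placeholder task keys; values (sweeps K, particles L, batch size,
# learning rate, gradient steps) are quoted from the paper's setup.
TRAIN_CONFIGS = {
    "task_1": dict(sweeps=5, particles=10, batch_size=20, lr=2.5e-4, steps=200_000),
    "task_2": dict(sweeps=8, particles=10, batch_size=20, lr=1e-4, steps=300_000),
    "task_3": dict(sweeps=5, particles=10, batch_size=5, lr=1e-4, steps=12_000_000),
}
```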