Parallel Tempering With a Variational Reference

Authors: Nikola Surjanovic, Saifuddin Syed, Alexandre Bouchard-Côté, Trevor Campbell

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper concludes with experiments that demonstrate the large empirical gains achieved by our method in a wide range of realistic Bayesian inference scenarios.
Researcher Affiliation | Academia | Nikola Surjanovic, Department of Statistics, University of British Columbia (nikola.surjanovic@stat.ubc.ca); Saifuddin Syed, Department of Statistics, University of Oxford (saifuddin.syed@stats.ox.ac.uk); Alexandre Bouchard-Côté, Department of Statistics, University of British Columbia (bouchard@stat.ubc.ca); Trevor Campbell, Department of Statistics, University of British Columbia (trevor@stat.ubc.ca)
Pseudocode | Yes | Algorithm 1: Non-reversible parallel tempering (NRPT). A hedged sketch of the algorithm follows this table.
Open Source Code | Yes | The code for the experiments is publicly available: Julia code at https://github.com/UBC-Stat-ML/VariationalPT and Blang code at https://github.com/UBC-Stat-ML/bl-vpt-nextflow.
Open Datasets | Yes | We consider various Bayesian inference problems: 11 based on real data, and 4 based on synthetic data (see Table 1 in Appendix F for the details of each).
Dataset Splits | No | The paper does not specify training/validation/test splits for its datasets; it only describes their general use in the experiments.
Hardware Specification | No | We also acknowledge use of the ARC Sockeye computing platform from the University of British Columbia. This names a computing platform but omits specific hardware details such as CPU/GPU models.
Software Dependencies | No | The paper mentions 'Julia code' and 'Blang code' but does not specify version numbers for these or any other software dependencies.
Experiment Setup | No | Experimental details can be found in Appendix F. The main body of the paper notes that methods have 'comparable cost per iteration' and use the 'same total number of chains and iterations', but it does not provide specific hyperparameters or system-level training settings.
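
For orientation, below is a minimal Julia sketch of the two ingredients the table refers to: non-reversible parallel tempering (Algorithm 1) with the deterministic even-odd swap scheme, plus a crude stand-in for the paper's variational reference, here a one-dimensional Gaussian refit by moment matching over a few tuning rounds. All names (`nrpt`, `explore`, `variational_pt`), the random-walk exploration kernel, and the equally spaced annealing schedule are illustrative assumptions, not the authors' implementation; see the VariationalPT repository linked above for the real code.

```julia
using Statistics

# Annealing path between a Gaussian reference pi_0 = N(mu, sigma^2) and a target pi_1:
# log pi_beta(x) = (1 - beta) * log pi_0(x) + beta * log pi_1(x), beta in [0, 1].
log_ref(x, mu, sigma) = -0.5 * ((x - mu) / sigma)^2 - log(sigma)
log_anneal(x, beta, lt, mu, sigma) = (1 - beta) * log_ref(x, mu, sigma) + beta * lt(x)

# Local exploration: one random-walk Metropolis step targeting pi_beta.
function explore(x, beta, lt, mu, sigma; step = 0.5)
    y = x + step * randn()
    logr = log_anneal(y, beta, lt, mu, sigma) - log_anneal(x, beta, lt, mu, sigma)
    return log(rand()) < logr ? y : x
end

# NRPT with the deterministic even-odd (DEO) swap scheme: even-indexed neighbour
# pairs attempt swaps on even iterations, odd-indexed pairs on odd iterations.
function nrpt(lt; mu = 0.0, sigma = 1.0, nchains = 8, niter = 5_000)
    betas = range(0.0, 1.0, length = nchains)  # equally spaced schedule (a simplification)
    xs = mu .+ sigma .* randn(nchains)         # one state per chain, drawn from the reference
    out = Float64[]
    for t in 1:niter
        xs = [explore(xs[i], betas[i], lt, mu, sigma) for i in 1:nchains]
        for i in (iseven(t) ? 1 : 2):2:(nchains - 1)
            # Swap acceptance ratio for neighbouring inverse temperatures.
            logr = log_anneal(xs[i],   betas[i+1], lt, mu, sigma) +
                   log_anneal(xs[i+1], betas[i],   lt, mu, sigma) -
                   log_anneal(xs[i],   betas[i],   lt, mu, sigma) -
                   log_anneal(xs[i+1], betas[i+1], lt, mu, sigma)
            if log(rand()) < logr
                xs[i], xs[i+1] = xs[i+1], xs[i]
            end
        end
        push!(out, xs[end])                    # the beta = 1 chain targets pi_1
    end
    return out
end

# Variational twist (rough stand-in): refit the Gaussian reference by moment
# matching against target-chain samples, then rerun NRPT from the fitted reference.
function variational_pt(lt; rounds = 3)
    mu, sigma, samples = 0.0, 1.0, Float64[]
    for _ in 1:rounds
        samples = nrpt(lt; mu = mu, sigma = sigma)
        mu, sigma = mean(samples), std(samples)
    end
    return samples
end

# Usage: a well-separated bimodal target that a single chain mixes over poorly.
lt(x) = log(0.5 * exp(-0.5 * (x - 4)^2) + 0.5 * exp(-0.5 * (x + 4)^2))
samples = variational_pt(lt)
```

Note that the swap acceptance ratio involves only the two neighbouring annealed densities, so communication between chains is cheap; the deterministic even-odd alternation is what makes the index process non-reversible, which is the source of NRPT's improved round-trip behaviour relative to randomized swap schemes.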