Learning Rate Free Sampling in Constrained Domains

Authors: Louis Sharrock, Lester Mackey, Christopher Nemeth

NeurIPS 2023

Each entry below gives the reproducibility variable, its result, and the supporting LLM response.
Research Type (Experimental): "We demonstrate the performance of our algorithms on a range of numerical examples, including sampling from targets on the simplex, sampling with fairness constraints, and constrained sampling problems in post-selection inference. Our results indicate that our algorithms achieve competitive performance with existing constrained sampling methods, without the need to tune any hyperparameters."
Researcher Affiliation (Collaboration): Louis Sharrock, Department of Mathematics and Statistics, Lancaster University, UK (l.sharrock@lancaster.ac.uk); Lester Mackey, Microsoft Research New England, Cambridge, MA (lmackey@microsoft.com); Christopher Nemeth, Department of Mathematics and Statistics, Lancaster University, UK (c.nemeth@lancaster.ac.uk).
Pseudocode (Yes): Algorithm 1, MSVGD; Algorithm 2, Coin MSVGD; Algorithm 3, Coin MIED; Algorithm 4, Mirrored LAWGD; Algorithm 5, Mirrored KSDD; Algorithm 6, Coin MLAWGD; Algorithm 7, Coin MKSDD; Algorithm 8, Adaptive Coin MSVGD. (A sketch of the coin-betting update shared by the "Coin" algorithms appears after this table.)
Open Source Code (Yes): "Code to reproduce all of the numerical results can be found at https://github.com/louissharrock/constrained-coin-sampling."
Open Datasets (Yes): "We use the Adult Income dataset [50]. ... We next consider a post-selection inference problem involving the HIV-1 drug resistance dataset studied in [8, 75]."
Dataset Splits (No): The paper mentions a "train-test split of 80% / 20%" but does not explicitly describe a validation split.
Hardware Specification (Yes): "We perform all experiments using a MacBook Pro 16" (2021) laptop with Apple M1 Pro chip and 16GB of RAM."
Software Dependencies (No): The paper states: "We implement all methods using Python 3, PyTorch, and TensorFlow." However, it provides a version number only for Python, not for PyTorch or TensorFlow, which are crucial software dependencies.
Experiment Setup (Yes): "We employ the IMQ kernel and the entropic mirror map [7]; and use N = 50 particles, T = 500 iterations. ... We run each algorithm for T = 1000 iterations. For Coin MSVGD, MSVGD, and SVMD, we use N = 50 particles, and generate N_total samples by aggregating the particles from N_total/N independent runs. ... We run all algorithms for T = 2000 iterations, using N = 50 particles." (The IMQ kernel and the entropic mirror map are sketched after this table.)
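
For readers unfamiliar with the coin-betting mechanism named in the pseudocode row: the "Coin" algorithms replace the learning rate of a gradient-based sampler with a betting update. The following is a minimal sketch of that pattern, assuming a generic drift callable (e.g., an SVGD or mirrored-SVGD direction) supplied by the caller. It is a simplified KT-style bettor for illustration, not the authors' implementation; the paper's algorithms refine the normalisation and run in the mirrored (dual) space.

    import numpy as np

    def coin_sampler(x0, drift, T=500):
        # Learning-rate-free particle updates via coin betting (sketch).
        # x0    : (N, d) array of initial particle positions.
        # drift : callable mapping an (N, d) array of positions to an
        #         (N, d) array of update directions c_t -- assumed here.
        x = x0.copy()
        coin_sum = np.zeros_like(x0)    # running sum of normalised coins
        reward = np.zeros(len(x0))      # per-particle accumulated winnings
        L = 1e-8                        # running bound on coin magnitudes
        for t in range(1, T + 1):
            c = drift(x)
            L = max(L, np.abs(c).max())        # adapt normalising constant
            coin_sum += c / L
            reward = np.maximum(
                reward + np.einsum('nd,nd->n', c / L, x - x0), 0.0)
            # KT-style bet: move from x0 by the average coin direction,
            # scaled by current wealth (initial wealth 1 plus winnings).
            x = x0 + coin_sum / t * (1.0 + reward)[:, None]
        return x

In Coin MSVGD, an update of this kind is applied to the mirrored particles, and the positions are mapped back through the inverse mirror map after each step.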
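Similarly, for the experiment-setup row: a common form of the IMQ kernel and of the entropic mirror map on the probability simplex is sketched below. The parameter choices (c = 1, exponent -1/2) are conventional defaults assumed for illustration, not values taken from the paper.

    import numpy as np

    def imq_kernel(x, y, c=1.0, beta=-0.5):
        # Inverse multiquadric kernel k(x, y) = (c^2 + ||x - y||^2)^beta.
        return (c**2 + np.sum((x - y)**2))**beta

    def entropic_mirror(x):
        # Entropic mirror map on the simplex: send the first d-1
        # coordinates to the unconstrained dual space via log-ratios
        # against the last coordinate.
        return np.log(x[:-1]) - np.log(x[-1])

    def entropic_mirror_inv(y):
        # Inverse map: softmax of (y, 0) returns a point in the simplex.
        z = np.concatenate([y, [0.0]])
        z = np.exp(z - z.max())        # subtract the max for stability
        return z / z.sum()

A mirrored sampler updates particles in the dual space and maps them back via the inverse; entropic_mirror_inv(entropic_mirror(x)) recovers x for any x in the interior of the simplex.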