Constrained Sampling with Primal-Dual Langevin Monte Carlo

Authors: Luiz Chamon, Mohammad Reza Karimi Jaghargh, Anna Korba

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We illustrate the relevance and effectiveness of PD-LMC in several applications.
Researcher Affiliation | Academia | Luiz F. O. Chamon (University of Stuttgart, luiz.chamon@simtech.uni-stuttgart.de); Mohammad Reza Karimi (ETH Zürich, mkarimi@inf.ethz.ch); Anna Korba (CREST, ENSAE, IP Paris, anna.korba@ensae.fr)
Pseudocode | Yes | Algorithm 1: Primal-dual LMC (a hedged sketch of this loop follows the table)
Open Source Code | Yes | Code for these examples is publicly available at https://www.github.com/lfochamon/pdlmc.
Open Datasets | Yes | The N = 32561 data points in the training set are composed of d = 62 socio-economic features (x ∈ R^d, including the intercept), and the goal is to predict whether the individual makes more than US$50,000 per year (y ∈ {0, 1}).
Dataset Splits | No | The paper refers to a training set (the N = 32561 data points above) and a test set (the probability of positive outputs is 19.1% across the whole test set), but it does not describe how the data were split.
Hardware Specification | No | The paper does not specify the hardware used to run the experiments (e.g., GPU/CPU models, memory, or cloud instances).
Software Dependencies | No | The NeurIPS checklist mentions that Python code will be made public, but neither the main paper nor its appendices list software dependencies with version numbers.
Experiment Setup | Yes | In these experiments, we start all chains at zero (unless stated otherwise) and use different step-sizes for each of the updates in steps 3-5 from Algorithm 1. We refer to them as ηx, ηλ, and ην. In contrast, we do not use diminishing step-sizes. (See the usage example after the sketch below.)
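
Since Algorithm 1 and the step-size conventions above are the core of what a reproduction would implement, here is a minimal Python sketch of a primal-dual LMC loop with three constant step sizes. It assumes the standard primal-dual structure for sampling from π ∝ exp(-f) subject to E[g] ≤ 0 and E[h] = 0: a Langevin step on the Lagrangian potential f + λᵀg + νᵀh, followed by stochastic dual updates evaluated at the current iterate, with the inequality multipliers projected onto the nonnegative orthant. All names and signatures are illustrative assumptions, not taken from the authors' repository.

```python
import numpy as np

def pd_lmc(grad_f, g, grad_g, h, grad_h, x0, n_steps,
           eta_x=1e-2, eta_lam=1e-2, eta_nu=1e-2, rng=None):
    """Hedged sketch of a primal-dual LMC loop (names are assumptions).

    Targets pi ∝ exp(-f) subject to E[g(x)] <= 0 and E[h(x)] = 0.
    grad_g / grad_h return Jacobians of shape (num_constraints, dim).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(len(g(x)))   # inequality multipliers, start at zero
    nu = np.zeros(len(h(x)))    # equality multipliers, start at zero
    samples = []
    for _ in range(n_steps):
        # Primal update (step size eta_x): Langevin step on the
        # Lagrangian potential U(x) = f(x) + lam . g(x) + nu . h(x).
        grad_U = grad_f(x) + grad_g(x).T @ lam + grad_h(x).T @ nu
        x = x - eta_x * grad_U + np.sqrt(2.0 * eta_x) * rng.standard_normal(x.shape)
        # Dual updates (step sizes eta_lam, eta_nu): stochastic ascent
        # using the current iterate; lam is kept nonnegative.
        lam = np.maximum(lam + eta_lam * g(x), 0.0)
        nu = nu + eta_nu * h(x)
        samples.append(x.copy())
    return np.asarray(samples), lam, nu
```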
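
A toy invocation consistent with the reported setup (chains started at zero, a constant step size per update), using a hypothetical 1-D standard Gaussian target with the single inequality constraint E[x] >= 1, written as g(x) = 1 - x:

```python
samples, lam, nu = pd_lmc(
    grad_f=lambda x: x,                      # f(x) = ||x||^2 / 2
    g=lambda x: np.array([1.0 - x[0]]),      # E[x] >= 1  <=>  E[1 - x] <= 0
    grad_g=lambda x: np.array([[-1.0]]),
    h=lambda x: np.zeros(0),                 # no equality constraints
    grad_h=lambda x: np.zeros((0, 1)),
    x0=np.zeros(1), n_steps=20_000)
print(samples[5_000:].mean())                # should settle near 1
```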