Sampling from Log-Concave Distributions with Infinity-Distance Guarantees

Authors: Oren Mangoubi, Nisheeth Vishnoi

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | The problem of sampling from a log-concave distribution is as follows: for a convex body K ⊆ R^d and a convex function f : K → R, output a sample from the distribution π(θ) ∝ e^{−f(θ)}. This is a basic problem in computer science, statistics, and machine learning, with applications to optimization and integration [1, 29], Bayesian statistics [39], reinforcement learning [6], and differential privacy [32, 20, 2, 24]. Sampling exactly from π is known to be computationally hard for most interesting cases of K and f [16]; hence, the goal is to output samples from a distribution that is at a small (specified) distance to π. For applications such as computing the integral of π, bounds in the total variation (TV) distance [1] or KL divergence (which implies a TV bound) are sufficient. In applications such as computing the expectation of a Lipschitz function with respect to π, Wasserstein distance may also be sufficient. In differentially private optimization [32, 20, 2, 19, 24], one requires bounds on the stronger infinity-distance d∞(μ, π) := sup_{θ ∈ K} |log(μ(θ)/π(θ))| to guarantee pure differential privacy, and TV, KL, or Wasserstein bounds are insufficient; see [14]. Pure differential privacy (DP) is the strongest notion of DP and has been extensively studied (see, e.g., the survey [14]). It has advantages over weaker notions of differential privacy: e.g., when privacy of groups of individuals (rather than just single individuals) must be preserved, any mechanism that is (pure) ε-DP with respect to single individuals is also kε-DP with respect to subsets of k individuals. Motivated by applications to differential privacy, we study the problem of designing efficient algorithms to output samples from a distribution which is ε-close in d∞ to π. (A numerical sketch contrasting TV and d∞ appears after this table.)
Researcher Affiliation | Academia | Oren Mangoubi (Worcester Polytechnic Institute); Nisheeth K. Vishnoi (Yale University)
Pseudocode | Yes | Algorithm 1: Interior point TV to infinity-distance converter (a hedged interface sketch appears after this table)
Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See supplementary material.
Open Datasets | No | This paper is theoretical and focuses on algorithm design and proofs. It does not involve empirical studies with datasets for training or evaluation.
Dataset Splits | No | This paper is theoretical and does not conduct experiments involving dataset splits (training, validation, testing).
Hardware Specification | No | This paper is theoretical and does not describe experiments that would require specific hardware. No hardware specifications are provided.
Software Dependencies | No | This paper is theoretical and does not describe an experimental setup with specific software dependencies and version numbers.
Experiment Setup | No | This paper is theoretical and does not describe an experimental setup with hyperparameters or system-level training settings.
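
To make the excerpt's TV-versus-d∞ point concrete, here is a minimal numerical sketch (ours, not from the paper). On a discretization of [0, 1], μ agrees with the uniform density π except on a sliver where it places zero mass: the two end up within TV distance of roughly 10⁻³, yet d∞(μ, π) = sup_θ |log(μ(θ)/π(θ))| is infinite, so a sampler with law μ certifies no finite pure-DP parameter.

```python
import numpy as np

# Discretize [0, 1] into n bins and compare two densities:
# pi = uniform; mu = uniform except for zero mass on a tiny sub-interval.
n = 10_000
pi = np.full(n, 1.0 / n)

mu = pi.copy()
mu[:10] = 0.0                 # mu assigns no mass to a 0.1% region
mu /= mu.sum()                # renormalize to a probability vector

tv = 0.5 * np.abs(mu - pi).sum()

# d_infinity(mu, pi) = sup |log(mu/pi)|; any bin where mu = 0 but pi > 0
# drives it to infinity.
with np.errstate(divide="ignore"):
    d_inf = np.max(np.abs(np.log(mu / pi)))

print(f"TV distance : {tv:.4f}")   # ~0.0010 -- tiny
print(f"d_infinity  : {d_inf}")    # inf -- no pure-DP guarantee follows
```

This is the failure mode that motivates a converter: a sampler can be arbitrarily TV-accurate while still assigning vanishing mass somewhere in K.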
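The Pseudocode row names the paper's Algorithm 1, an "interior point TV to infinity-distance converter." Below is a hypothetical interface sketch, not the paper's actual procedure: the names tv_sampler, shrink, and fallback_sampler and the parameters r and delta are placeholders we introduce for illustration, and the paper's specific shrinkage and mixing choices are omitted. The one point the sketch encodes faithfully is generic: mixing in a δ-weight component ρ whose density ratio to π is bounded gives an output density μ with μ(θ) ≥ δ·ρ(θ) pointwise, which rules out the infinite-d∞ failure shown in the previous sketch; bounding the other side of the ratio is where the interior-point shrinkage and the paper's analysis do the real work.

```python
import numpy as np

def tv_to_dinf_converter(tv_sampler, shrink, fallback_sampler, r, delta, rng=None):
    """Hypothetical skeleton of a TV -> d_infinity converter (placeholder
    names and parameters; NOT the paper's exact Algorithm 1).

    tv_sampler(body):   assumed to return a draw whose law is TV-close to
                        pi restricted to `body`.
    shrink(r):          assumed to return K scaled by (1 - r) toward a
                        fixed interior point.
    fallback_sampler(): assumed to draw from a distribution rho on K whose
                        density ratio to pi is bounded.

    Mixing in the fallback with weight delta yields an output density mu
    with mu >= delta * rho pointwise, so log(mu/pi) cannot diverge on
    regions the TV-accurate sampler undercovers. Choosing r and delta so
    the output is epsilon-close to pi in d_infinity is the substance of
    the paper's analysis and is not reproduced here.
    """
    rng = rng or np.random.default_rng()
    if rng.random() < delta:
        return fallback_sampler()
    return tv_sampler(shrink(r))

# Toy usage on K = [0, 1] with pi = uniform, interior point 0.5:
rng = np.random.default_rng(0)
sample = tv_to_dinf_converter(
    tv_sampler=lambda body: rng.uniform(body[0], body[1]),
    shrink=lambda r: (0.5 * r, 1.0 - 0.5 * r),   # [0, 1] scaled toward 0.5
    fallback_sampler=lambda: rng.uniform(0.0, 1.0),
    r=0.01,
    delta=0.05,
    rng=rng,
)
print(sample)
```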