Nonparametric Density Estimation & Convergence Rates for GANs under Besov IPM Losses

Authors: Ananya Uppal, Shashank Singh, Barnabás Póczos

NeurIPS 2019

Reproducibility variable, result, and LLM response for each item:

Research Type: Theoretical
  LLM response: We study the problem of estimating a nonparametric probability density under a large family of losses called Besov IPMs... For a wide variety of settings, we provide both lower and upper bounds, identifying precisely how the choice of loss function and assumptions on the data interact to determine the minimax optimal convergence rate. We also show that linear distribution estimates, such as the empirical distribution or kernel density estimator, often fail to converge at the optimal rate. Our bounds generalize, unify, or improve several recent and classical results. Moreover, IPMs can be used to formalize a statistical model of generative adversarial networks (GANs). Thus, we show how our results imply bounds on the statistical error of a GAN, showing, for example, that GANs can strictly outperform the best linear estimator.

Researcher Affiliation: Collaboration
  LLM response: Ananya Uppal, Department of Mathematical Sciences, Carnegie Mellon University (auppal@andrew.cmu.edu); Shashank Singh and Barnabás Póczos, Machine Learning Department, Carnegie Mellon University ({sss1,bapoczos}@cs.cmu.edu). Footnote 1: now at Google.

Pseudocode: No
  LLM response: The paper describes algorithmic components, such as the wavelet-thresholding estimator, mathematically, but it does not include any formal pseudocode or algorithm blocks.

Open Source Code: No
  LLM response: The paper does not provide any concrete access information for open-source code, such as a repository link or an explicit statement of code release.

Open Datasets: No
  LLM response: The paper is theoretical and discusses samples as a conceptual basis for its analysis, but it does not specify any actual datasets used in experiments or provide access information for them.

Dataset Splits: No
  LLM response: The paper is theoretical and does not describe any experimental setup involving dataset splits for training, validation, or testing.

Hardware Specification: No
  LLM response: The paper is theoretical and focuses on mathematical proofs and convergence rates, so it does not discuss hardware specifications for running experiments.

Software Dependencies: No
  LLM response: The paper is theoretical and does not mention any specific software dependencies with version numbers.

Experiment Setup: No
  LLM response: The paper is theoretical and does not describe an experimental setup with specific hyperparameters or training configurations.
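The abstract's claim that linear estimators, such as the kernel density estimator, can be strictly outperformed is easier to picture with a concrete instance of such an estimator. The sketch below is illustrative only and is not taken from the paper; the function name, bandwidth, and usage are our own assumptions.

```python
import math
import random

def gaussian_kde(samples, x, bandwidth):
    """Evaluate a Gaussian kernel density estimate at point x.

    This is a "linear" estimator in the paper's sense: the estimate
    is a fixed linear functional of the empirical distribution, with
    no data-adaptive selection (unlike wavelet thresholding).
    """
    n = len(samples)
    norm = bandwidth * math.sqrt(2.0 * math.pi)
    return sum(
        math.exp(-0.5 * ((x - xi) / bandwidth) ** 2) / norm
        for xi in samples
    ) / n

# Hypothetical usage: estimate a standard-normal density at 0 from
# 2000 samples; the true value is 1/sqrt(2*pi) ~ 0.3989.
random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(2000)]
est = gaussian_kde(samples, 0.0, bandwidth=0.3)
true_density = 1.0 / math.sqrt(2.0 * math.pi)
```

For smooth densities a well-tuned KDE attains the classical minimax rate, but the paper shows that over many Besov classes no linear estimator of this form can be rate-optimal, which is the gap a GAN-style estimator can exploit.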