Optimal Testing for Properties of Distributions

Authors: Jayadev Acharya, Constantinos Daskalakis, Gautam Kamath

NeurIPS 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We provide a general approach via which we obtain sample-optimal and computationally efficient testers for all these distribution families. At the core of our approach is an algorithm which solves the following problem: given samples from an unknown distribution p and a known distribution q, are p and q close in χ²-distance, or far in total variation distance? The optimality of our testers is established by providing matching lower bounds, up to constant factors. Finally, a necessary building block for our testers and an important byproduct of our work are the first known computationally efficient proper learners for discrete log-concave and monotone hazard rate distributions. (The two distances are recalled for reference after the table.)
Researcher Affiliation | Academia | Jayadev Acharya, Constantinos Daskalakis, Gautam Kamath; EECS, MIT; {jayadev, costis, g}@mit.edu
Pseudocode | Yes | Algorithm 1 (Chi-squared testing algorithm). 1: Input: ε; an explicit distribution q; Poisson(m) samples from a distribution p, where N_i denotes the number of occurrences of the i-th domain element. 2: A ← {i : q_i ≥ ε²/50n}. 3: Z ← Σ_{i∈A} ((N_i − m·q_i)² − N_i) / (m·q_i). 4: if Z ≤ m·ε²/10, return close. 5: else return far. (A runnable sketch follows the table.)
Open Source Code | No | The paper does not provide any statement about releasing source code for the methodology described, nor does it include links to a code repository.
Open Datasets | No | The paper is theoretical and focuses on sample complexity. It does not use or refer to any specific publicly available datasets for experimental training, nor does it provide any links or citations to such datasets.
Dataset Splits | No | The paper does not mention training, validation, or test dataset splits. The term 'validation' in the paper refers to a section heading, not a data split.
Hardware Specification | No | The paper does not provide any details about the specific hardware (e.g., CPU, GPU models, memory, or cloud instances) used for running experiments.
Software Dependencies | No | The paper is theoretical and does not mention any specific software dependencies with version numbers (e.g., Python, PyTorch, or specific solvers).
Experiment Setup | No | The paper is theoretical and does not provide details about an experimental setup, such as hyperparameters, training schedules, or specific system-level configuration settings.
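For reference, the two distances contrasted in the quoted abstract have the standard definitions below; the notation p_i, q_i for the probabilities of the i-th domain element is an assumption of this note, not something stated in the table.

\[ \chi^2(p, q) = \sum_i \frac{(p_i - q_i)^2}{q_i}, \qquad d_{\mathrm{TV}}(p, q) = \frac{1}{2} \sum_i |p_i - q_i|. \]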
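The pseudocode row above is complete enough to transcribe. The following is a minimal NumPy sketch of Algorithm 1, under the assumptions that the samples arrive as integer indices in {0, ..., n−1} and that the realized (Poissonized) sample size is used as m; the function name chi_squared_test and this input encoding are illustrative choices, not details taken from the paper.

import numpy as np

def chi_squared_test(samples, q, eps):
    # Sketch of Algorithm 1 (chi-squared testing algorithm).
    # Assumptions: `samples` is a 1-D array of domain indices in {0, ..., n-1}
    # drawn from the unknown distribution p (Poissonized, so its length is
    # treated as m), `q` is the explicit reference distribution of length n,
    # and `eps` is the proximity parameter. Returns "close" or "far".
    q = np.asarray(q, dtype=float)
    n = len(q)
    m = len(samples)                                    # realized sample size
    N = np.bincount(np.asarray(samples), minlength=n)   # N_i = count of element i

    # Step 2: keep only elements with non-negligible mass under q.
    A = q >= eps**2 / (50 * n)

    # Step 3: Z = sum_{i in A} ((N_i - m*q_i)^2 - N_i) / (m*q_i).
    Z = np.sum(((N[A] - m * q[A]) ** 2 - N[A]) / (m * q[A]))

    # Steps 4-5: compare against the threshold m * eps^2 / 10.
    return "close" if Z <= m * eps**2 / 10 else "far"

In the paper m is the Poisson parameter of the sample size; using the realized sample size here keeps the sketch self-contained without changing the form of the statistic.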