Multivariate f-divergence Estimation With Confidence

Authors: Kevin R. Moon, Alfred O. Hero III

NeurIPS 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We experimentally validate our theoretical results and, as an illustration, use them to empirically bound the best achievable classification error. We first apply the weighted ensemble estimator of divergence to simulated data to verify the central limit theorem. We then use the estimator to obtain confidence intervals on the error exponents of the Bayes probability of error for the Iris data set from the UCI machine learning repository [33, 34]."
Researcher Affiliation | Academia | "Kevin R. Moon, Department of EECS, University of Michigan, Ann Arbor, MI, krmoon@umich.edu; Alfred O. Hero III, Department of EECS, University of Michigan, Ann Arbor, MI, hero@eecs.umich.edu"
Pseudocode | Yes | "Algorithm 1 Optimally weighted ensemble divergence estimator" (a hedged code sketch of this algorithm follows the table)
Open Source Code | No | The paper does not provide any links to open-source code or explicitly state that code is available.
Open Datasets | Yes | "We then use the estimator to obtain confidence intervals on the error exponents of the Bayes probability of error for the Iris data set from the UCI machine learning repository [33, 34]."
Dataset Splits | Yes | "We compared the bounds to the performance of a quadratic discriminant analysis classifier (QDA) with 5-fold cross validation." (see the cross-validation sketch below the table)
Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU models, CPU types) used for the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers.
Experiment Setup | Yes | "In practice, we estimate cα(f1||f2) for multiple values of α (e.g. 0.01, 0.02, ..., 0.99) and choose the minimum. We estimated a bound on the pairwise Bayes error ... and used bootstrapping to calculate confidence intervals." (see the bootstrap sketch below the table)
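
The pseudocode row refers to the paper's Algorithm 1, which combines an ensemble of plug-in divergence estimators whose weights are chosen by a convex optimization that cancels the leading bias terms. Since no code accompanies the paper, the following Python sketch is illustrative only: it assumes a k-NN plug-in KL-divergence base estimator (in the style of Wang, Kulkarni, and Verdú) and a closed-form least-norm weight solution rather than the paper's exact relaxed optimization; the function names knn_kl, ensemble_weights, and ensemble_divergence are ours.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_kl(X, Y, k):
    """k-NN plug-in estimate of KL(f_X || f_Y) (Wang-Kulkarni-Verdu form)."""
    n, d = X.shape
    m = Y.shape[0]
    # k-th neighbour distance of each X_i within X; k+1 because the
    # nearest neighbour of X_i in X is X_i itself
    rho = cKDTree(X).query(X, k + 1)[0].reshape(n, -1)[:, -1]
    # k-th neighbour distance of each X_i within the Y sample
    nu = cKDTree(Y).query(X, k)[0].reshape(n, -1)[:, -1]
    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1))

def ensemble_weights(ks, d):
    """Least-norm weights: sum(w) = 1 and sum_k w_k * k**(i/d) = 0
    for i = 1..d-1, cancelling leading bias terms. Requires at least
    d distinct ensemble parameters."""
    ls = np.asarray(ks, dtype=float)
    A = np.vstack([ls ** (i / d) for i in range(d)])  # row i=0 is all ones
    b = np.zeros(d)
    b[0] = 1.0  # the weights must sum to one
    return A.T @ np.linalg.solve(A @ A.T, b)

def ensemble_divergence(X, Y, ks):
    """Weighted ensemble of base estimators, one per neighbourhood size k."""
    w = ensemble_weights(ks, X.shape[1])
    return float(w @ np.array([knn_kl(X, Y, k) for k in ks]))
```

No optimization package is needed in this sketch because the least-norm problem min ||w||² subject to Aw = b has the closed form w = Aᵀ(AAᵀ)⁻¹b; the paper's actual weight computation is a relaxed convex program.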
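For the dataset-splits row, the paper only states that the QDA baseline was scored with 5-fold cross validation on the Iris data, without naming an implementation. A minimal sketch of that protocol, assuming scikit-learn (our choice of library):

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
# 5-fold cross-validated accuracy of QDA on Iris, as in the paper's comparison
scores = cross_val_score(QuadraticDiscriminantAnalysis(), X, y, cv=5)
print(f"QDA 5-fold CV error rate: {1.0 - scores.mean():.3f}")
```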
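For the experiment-setup row, two steps are concrete enough to sketch: scanning α over a grid and keeping the minimum of the estimated cα(f1||f2), and bootstrapping the estimate to obtain confidence intervals. The paper does not specify its bootstrap variant; below is a generic percentile bootstrap, with estimate_c_alpha a hypothetical placeholder for the paper's cα estimator.

```python
import numpy as np

def bootstrap_ci(X, Y, estimator, n_boot=1000, level=0.95, seed=None):
    """Percentile-bootstrap confidence interval for a two-sample functional."""
    rng = np.random.default_rng(seed)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        Xb = X[rng.integers(0, len(X), size=len(X))]  # resample with replacement
        Yb = Y[rng.integers(0, len(Y), size=len(Y))]
        stats[b] = estimator(Xb, Yb)
    tail = 100 * (1 - level) / 2
    return np.percentile(stats, [tail, 100 - tail])

# Hypothetical usage, mirroring "estimate c_alpha for multiple values of
# alpha (0.01, 0.02, ..., 0.99) and choose the minimum":
# alphas = np.arange(0.01, 1.0, 0.01)
# best = min(estimate_c_alpha(X, Y, a) for a in alphas)  # placeholder estimator
# lo, hi = bootstrap_ci(X, Y, lambda A, B: min(estimate_c_alpha(A, B, a) for a in alphas))
```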