Practical and Consistent Estimation of f-Divergences

Authors: Paul Rubenstein, Olivier Bousquet, Josip Djolonga, Carlos Riquelme, Ilya O. Tolstikhin

NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We verify the behavior of our estimator empirically in both synthetic and real-data experiments, and discuss its direct implications for total correlation, entropy, and mutual information estimation. |
| Researcher Affiliation | Collaboration | Paul K. Rubenstein (Max Planck Institute for Intelligent Systems, Tübingen & Machine Learning Group, University of Cambridge; paul.rubenstein@tuebingen.mpg.de); Olivier Bousquet, Josip Djolonga, Carlos Riquelme, Ilya Tolstikhin (Google Research, Brain Team, Zürich; {obousquet, josipd, rikel, tolstikhin}@google.com) |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | A Python notebook to reproduce all experiments is available at https://github.com/google-research/google-research/tree/master/f_divergence_estimation_ram_mc. |
| Open Datasets | Yes | We consider models pre-trained on the CelebA dataset [25]. |
| Dataset Splits | No | The paper mentions using a "test dataset" but does not specify training, validation, or test split percentages or sample counts for the CelebA dataset. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments. |
| Software Dependencies | No | The paper does not specify versions for ancillary software dependencies. |
| Experiment Setup | Yes | We choose a setting in which Q_Z^λ, parametrized by a scalar λ, and P_Z are both d-variate normal distributions for d ∈ {1, 4, 16}. ... We show the behaviour of RAM-MC with N ∈ {1, 500} and M = 128. ... For the plug-in estimator, the densities q̂(z) and p̂(z) were estimated by kernel density estimation with 500 samples from Q_Z and P_Z respectively. ... The divergence was then estimated via MC sampling using 128 samples from Q_Z. ... RAM-MC is evaluated using N ∈ {2^0, 2^1, ..., 2^14} and M ∈ {10, 10^3}. |
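The plug-in baseline described in the Experiment Setup row combines kernel density estimation with Monte Carlo averaging. Below is a minimal sketch of that baseline only, not the authors' notebook and not RAM-MC itself: it assumes a KL divergence between two d-variate Gaussians with illustrative parameters, fits SciPy's `gaussian_kde` on 500 samples from each distribution, and averages the log density ratio over 128 Monte Carlo samples from Q_Z. All specific means, covariances, and the value of λ are placeholders.

```python
# Sketch of the KDE plug-in + MC baseline from the Experiment Setup row.
# Illustrative parameters only; the authors' notebook (linked above) is the reference implementation.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
d = 4                          # dimension, mirroring d in {1, 4, 16}
n_kde, n_mc = 500, 128         # 500 samples for KDE fitting, 128 MC samples

# Placeholder Gaussians standing in for Q_Z^lambda (parametrized by a scalar lambda) and P_Z.
lam = 0.5
mu_q, cov_q = lam * np.ones(d), np.eye(d)
mu_p, cov_p = np.zeros(d), np.eye(d)

# Fit kernel density estimates q_hat and p_hat from 500 samples of each distribution.
q_samples = rng.multivariate_normal(mu_q, cov_q, size=n_kde).T   # shape (d, n_kde)
p_samples = rng.multivariate_normal(mu_p, cov_p, size=n_kde).T
q_hat = gaussian_kde(q_samples)
p_hat = gaussian_kde(p_samples)

# Plug-in Monte Carlo estimate of KL(Q_Z || P_Z) = E_{Q_Z}[log q(z) - log p(z)],
# averaged over 128 fresh samples drawn from Q_Z.
z = rng.multivariate_normal(mu_q, cov_q, size=n_mc).T            # shape (d, n_mc)
kl_plugin = np.mean(np.log(q_hat(z)) - np.log(p_hat(z)))

# Closed-form KL for these two Gaussians (equal identity covariances), for comparison.
kl_true = 0.5 * (mu_q - mu_p) @ np.linalg.inv(cov_p) @ (mu_q - mu_p)
print(f"plug-in KDE+MC estimate: {kl_plugin:.3f}, closed form: {kl_true:.3f}")
```

Running the sketch for d ∈ {1, 4, 16} mirrors the dimensions used in the synthetic experiment, where this plug-in estimate serves as the point of comparison for RAM-MC.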