Nonparametric Score Estimators

Authors: Yuhao Zhou, Jiaxin Shi, Jun Zhu

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our estimators on both synthetic and real data. In Sec. 5.1, we consider a challenging grid distribution as described in the experiment of Sutherland et al. (2018) to test the accuracy of nonparametric score estimators in high dimensions and out-of-sample points. In Sec. 5.2 we train Wasserstein autoencoders (WAE) with score estimation and compare the accuracy and the efficiency of different estimators." (A toy version of this grid target is sketched after the table.)
Researcher Affiliation | Academia | "Dept. of Comp. Sci. & Tech., BNRist Center, Institute for AI, Tsinghua-Bosch ML Center, Tsinghua University. Correspondence to: J. Zhu <dcszj@tsinghua.edu.cn>."
Pseudocode | Yes | "We describe the full algorithm in Example C.4 (appendix C.4.3)."
Open Source Code | Yes | "Code is available at https://github.com/miskcoo/kscore." (A generic kernel score-estimator baseline is sketched after the table.)
Open Datasets | Yes | "We train WAEs on MNIST and CelebA and repeat each configuration 3 times. The average negative log-likelihoods for MNIST estimated by AIS (Neal, 2001) are reported in Table 2. The results for CelebA are reported in appendix A." (A minimal AIS sketch follows the table.)
Dataset Splits | No | The paper uses the MNIST and CelebA datasets but does not explicitly detail the training, validation, and test splits, which a reproduction would need. (An assumed default split is shown after the table.)
Hardware Specification | Yes | "All models are timed on GeForce GTX TITAN X GPU."
Software Dependencies | No | The paper does not provide specific version numbers for the software dependencies or libraries used in the experiments.
Experiment Setup | Yes | "We train WAEs on MNIST and CelebA and repeat each configuration 3 times. The average negative log-likelihoods for MNIST estimated by AIS (Neal, 2001) are reported in Table 2. [...] KEF-CG for λ = 10⁻⁵ on MNIST. We report the result of 32 runs in Fig. 1." (A CG-solve sketch follows the table.)
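
For context on the Sec. 5.1 experiment, below is a minimal sketch of a grid-shaped Gaussian mixture together with its exact score. The grid size, spacing, and per-component variance are illustrative assumptions, not the exact configuration of Sutherland et al. (2018).

```python
import numpy as np

rng = np.random.default_rng(0)

# Equal-weight Gaussian mixture whose centers lie on a 2-D grid
# (grid layout and variance are assumptions for illustration).
centers = np.array([(i, j) for i in range(-2, 3, 2)
                           for j in range(-2, 3, 2)], dtype=float)
sigma2 = 0.25  # per-component variance

def sample(n):
    """Draw n samples from the grid mixture."""
    idx = rng.integers(len(centers), size=n)
    return centers[idx] + np.sqrt(sigma2) * rng.standard_normal((n, 2))

def true_score(x):
    """Exact score grad log p(x), via component responsibilities."""
    diff = x[:, None, :] - centers[None, :, :]        # (n, K, 2)
    logits = -0.5 * (diff ** 2).sum(-1) / sigma2      # (n, K)
    resp = np.exp(logits - logits.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)
    return -(resp[:, :, None] * diff).sum(axis=1) / sigma2
```

Comparing an estimator's output against `true_score` at held-out samples gives the out-of-sample accuracy metric the quote alludes to.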
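The released kscore package implements the paper's family of estimators; its exact API is not reproduced here. As a self-contained baseline in the same spirit, here is a minimal sketch of the Stein gradient estimator (Li & Turner, 2018) with an RBF kernel and a median-heuristic bandwidth; the names `rbf_stein_score` and `eta` (a regularizer analogous to the paper's λ) are our own.

```python
import numpy as np

def rbf_stein_score(x, eta=1e-3, bandwidth=None):
    """Stein gradient estimator: score_hat = -(K + eta*I)^{-1} <grad, K>,
    evaluated at the sample points x of shape (n, d).
    A generic baseline, not the API of the released kscore package."""
    n, d = x.shape
    diff = x[:, None, :] - x[None, :, :]          # (n, n, d)
    sq = (diff ** 2).sum(-1)                      # squared pairwise distances
    if bandwidth is None:                         # median heuristic (one common variant)
        bandwidth = np.sqrt(0.5 * np.median(sq[sq > 0]))
    K = np.exp(-sq / (2.0 * bandwidth ** 2))
    # <grad, K>[i, :] = sum_j dK(x_i, x_j)/dx_j = sum_j K_ij (x_i - x_j) / h^2
    grad_K = (K[:, :, None] * diff).sum(axis=1) / bandwidth ** 2
    return -np.linalg.solve(K + eta * np.eye(n), grad_K)

# Usage with the grid-mixture sketch above:
# x = sample(200)
# mse = np.mean((rbf_stein_score(x) - true_score(x)) ** 2)
```

Note this baseline is in-sample only; extending such estimators to out-of-sample points is exactly what the paper's framework addresses.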
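AIS (Neal, 2001) is named as the log-likelihood evaluation tool. Below is a minimal, self-contained sketch of the idea on a 1-D toy target, using geometric bridging with random-walk Metropolis moves; the schedule, step size, and chain counts are illustrative, not the paper's evaluation code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: unnormalized Gaussian; true log normalizer is log(sigma*sqrt(2*pi)).
mu, sigma = 2.0, 0.5
def log_f_target(x):
    return -0.5 * ((x - mu) / sigma) ** 2

def log_f_prior(x):  # standard normal, already normalized
    return -0.5 * x ** 2 - 0.5 * np.log(2.0 * np.pi)

n_chains, n_steps = 2000, 200
betas = np.linspace(0.0, 1.0, n_steps + 1)

x = rng.standard_normal(n_chains)   # exact samples from the prior
log_w = np.zeros(n_chains)

for t in range(1, n_steps + 1):
    def log_f(x, b=betas[t]):       # geometric bridge f_t = f_0^(1-b) * f_T^b
        return (1.0 - b) * log_f_prior(x) + b * log_f_target(x)
    # Accumulate the incremental importance weight f_t(x) / f_{t-1}(x).
    b_prev = betas[t - 1]
    log_w += log_f(x) - ((1.0 - b_prev) * log_f_prior(x) + b_prev * log_f_target(x))
    # One random-walk Metropolis step targeting f_t.
    prop = x + 0.5 * rng.standard_normal(n_chains)
    accept = np.log(rng.uniform(size=n_chains)) < log_f(prop) - log_f(x)
    x = np.where(accept, prop, x)

log_z_hat = np.logaddexp.reduce(log_w) - np.log(n_chains)
print(log_z_hat, np.log(sigma * np.sqrt(2.0 * np.pi)))  # estimate vs. ground truth
```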
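Since the paper does not state its splits, a reproduction would likely fall back on the standard MNIST partition (60,000 train / 10,000 test); the validation carve-out below is purely our assumption.

```python
# Standard MNIST partition (60k train / 10k test); the validation split
# below is a hypothetical choice, since the paper does not specify one.
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0
x_val, x_train = x_train[:5000], x_train[5000:]   # assumed 5k validation set
```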
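The "CG" in KEF-CG refers to conjugate gradients: the regularized kernel linear system is solved iteratively instead of by direct factorization, which is what the Fig. 1 residual study measures. Below is a generic, matrix-free sketch of such a solve; the Gram matrix and right-hand side are synthetic stand-ins, and only λ = 10⁻⁵ comes from the quote above.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)

# Synthetic PSD stand-in for the estimator's Gram-type matrix and RHS.
n = 500
A = rng.standard_normal((n, n))
G = A @ A.T / n
b = rng.standard_normal(n)
lam = 1e-5  # the lambda reported in the quote above

# Matrix-free operator v -> (G + lam*I) v, solved iteratively with CG.
op = LinearOperator((n, n), matvec=lambda v: G @ v + lam * v)
alpha, info = cg(op, b, maxiter=2000)
assert info == 0, "CG did not converge"

residual = np.linalg.norm(G @ alpha + lam * alpha - b) / np.linalg.norm(b)
print(f"relative residual: {residual:.2e}")
```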