Distributionally Robust Local Non-parametric Conditional Estimation

Authors: Viet Anh Nguyen, Fan Zhang, Jose Blanchet, Erick Delage, Yinyu Ye

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Experiments with synthetic and MNIST datasets show the competitive performance of this new class of estimators. |
| Researcher Affiliation | Academia | Viet Anh Nguyen, Fan Zhang, José Blanchet (Stanford University, United States; {viet-anh.nguyen, fzh, jose.blanchet}@stanford.edu); Erick Delage (HEC Montréal, Canada; erick.delage@hec.ca); Yinyu Ye (Stanford University, United States; yinyu-ye@stanford.edu) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Codes are available at https://github.com/nvietanh/DRCME. |
| Open Datasets | Yes | "In this section we compare the quality of our proposed Distributionally Robust Conditional Mean Estimator (DRCME) to k-nearest neighbour (k-NN), Nadaraya-Watson (N-W), and Nadaraya-Epanechnikov (N-E) estimators, together with the robust k-NN approach in [2] (Bert et al.), using a synthetic dataset and the MNIST database [23]." (A hedged sketch of these baseline estimators follows the table.) |
| Dataset Splits | Yes | "The hyperparameters of all the estimators, whose range and selection are given in Appendix A, are chosen by leave-one-out cross validation." In each experiment, the hyperparameters of all four methods were chosen by leave-one-out cross validation. (See the cross-validation sketch after the table.) |
| Hardware Specification | No | The paper does not provide any specific details of the hardware used to run its experiments. |
| Software Dependencies | No | The paper mentions "commercial optimization solvers such as MOSEK [27]" and the "Python Optimal Transport toolbox [12]" but does not provide version numbers for these software dependencies. (See the version-recording sketch after the table.) |
| Experiment Setup | Yes | "The hyperparameters of all the estimators, whose range and selection are given in Appendix A, are chosen by leave-one-out cross validation." Table 1 presents the median choice of hyperparameters for each estimator. |
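The Open Datasets row names the baseline conditional mean estimators that the paper compares DRCME against. As a point of reference only, below is a minimal NumPy sketch of a k-NN estimate and a Nadaraya-Watson estimate with Gaussian or Epanechnikov kernels; it is not taken from the authors' repository (https://github.com/nvietanh/DRCME), and the function names, the default k, and the bandwidth are illustrative assumptions.

```python
# Minimal sketch of the baseline conditional mean estimators mentioned in the
# "Open Datasets" row (k-NN, Nadaraya-Watson, Nadaraya-Epanechnikov).
# This is NOT the authors' code; names and defaults are illustrative.
import numpy as np

def knn_estimate(X_train, y_train, x, k=5):
    """k-nearest-neighbour conditional mean: average of the k closest targets."""
    dist = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dist)[:k]
    return float(y_train[nearest].mean())

def nadaraya_watson(X_train, y_train, x, bandwidth=1.0, kernel="gaussian"):
    """Kernel-weighted average of the targets (Nadaraya-Watson estimator)."""
    u = np.linalg.norm(X_train - x, axis=1) / bandwidth
    if kernel == "gaussian":
        w = np.exp(-0.5 * u**2)
    elif kernel == "epanechnikov":
        # proportional to the Epanechnikov kernel; the constant cancels in the ratio
        w = np.maximum(1.0 - u**2, 0.0)
    else:
        raise ValueError(f"unknown kernel: {kernel}")
    if w.sum() == 0.0:  # no training point falls inside the bandwidth
        return float(y_train.mean())
    return float(np.dot(w, y_train) / w.sum())
```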
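The Dataset Splits and Experiment Setup rows both point to leave-one-out cross validation for hyperparameter selection. The following is a hedged sketch of that selection loop, reusing the `knn_estimate` function from the previous block to pick k; the candidate grid and the squared-error criterion are assumptions, not the ranges reported in the paper's Appendix A or Table 1.

```python
# Hedged sketch of leave-one-out cross validation for hyperparameter selection,
# here applied to the k of the k-NN baseline sketched above. The candidate grid
# and the squared-error criterion are assumptions, not the paper's settings.
import numpy as np

def loo_cv_select_k(X, y, candidate_ks=(1, 3, 5, 7, 9)):
    n = len(y)
    best_k, best_err = None, np.inf
    for k in candidate_ks:
        err = 0.0
        for i in range(n):                       # leave sample i out
            mask = np.arange(n) != i
            pred = knn_estimate(X[mask], y[mask], X[i], k=k)
            err += (pred - y[i]) ** 2
        if err < best_err:                       # keep the k with the lowest LOO error
            best_k, best_err = k, err
    return best_k
```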
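The Software Dependencies row notes that MOSEK and the Python Optimal Transport toolbox are cited without version numbers. A reproduction could record the installed versions at runtime with something like the snippet below; "Mosek" and "POT" are assumed here to be the pip distribution names in the environment.

```python
# Hedged sketch: record the versions of the dependencies the paper mentions
# without version numbers. "Mosek" and "POT" are assumed pip distribution
# names; adjust if your environment installs them differently. Python >= 3.8.
from importlib import metadata

for dist in ("Mosek", "POT", "numpy"):
    try:
        print(f"{dist}=={metadata.version(dist)}")
    except metadata.PackageNotFoundError:
        print(f"{dist}: not installed")
```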