Optimistic Distributionally Robust Optimization for Nonparametric Likelihood Approximation

Authors: Viet Anh Nguyen, Soroosh Shafieezadeh-Abadeh, Man-Chung Yue, Daniel Kuhn, Wolfram Wiesemann

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We use our optimistic likelihood approximation in the ELBO problem (1) for posterior inference. We prove that the resulting posterior inference problems under the KL divergence and the Wasserstein distance enjoy strong theoretical guarantees, and we illustrate their promising empirical performance in numerical experiments." See also Section 6, "Numerical Experiments".
Researcher Affiliation | Academia | Viet Anh Nguyen and Soroosh Shafieezadeh-Abadeh (École Polytechnique Fédérale de Lausanne, Switzerland; {viet-anh.nguyen, soroosh.shafiee}@epfl.ch); Man-Chung Yue (The Hong Kong Polytechnic University, Hong Kong; manchung.yue@polyu.edu.hk); Daniel Kuhn (École Polytechnique Fédérale de Lausanne, Switzerland; daniel.kuhn@epfl.ch); Wolfram Wiesemann (Imperial College Business School, United Kingdom; ww@imperial.ac.uk)
Pseudocode | No | The paper describes its methods mathematically but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | Yes | "The source code, including our algorithm and all tests implemented in Python, are available from https://github.com/sorooshafiee/Nonparam_Likelihood."
Open Datasets | Yes | Section 6.2 benchmarks the performance of the different likelihood approximations in a probabilistic classification task on standard UCI datasets.
Dataset Splits | Yes | "In our experiments involving the Wasserstein ambiguity set, we randomly select 75% of the available data as training set and the remaining 25% as test set. We then use the training samples to tune the radii εi ∈ {a × 10^b : a ∈ {1, ..., 9}, b ∈ {−3, −2, −1}}, i = 1, 2, of the Wasserstein balls by a stratified 5-fold cross validation." (A code sketch of this split-and-tuning protocol follows the table.)
Hardware Specification | No | The paper does not specify any hardware details (e.g., CPU or GPU models, or cloud resources) used to run the experiments.
Software Dependencies | No | The paper notes that the code is implemented in Python ("all tests implemented in Python") but gives no version numbers for Python or for any of the libraries it depends on.
Experiment Setup | Yes | "We conduct the following experiment for different training set sizes Ni ∈ {1, 2, 4, 8, 10} and different ambiguity set radii ε. For each parameter setting, our experiment consists of 100 repetitions." The synthetic experiment uses "M = 20 trials and success probability θtrue = 0.6", and the Wasserstein radii are tuned over εi ∈ {a × 10^b : a ∈ {1, ..., 9}, b ∈ {−3, −2, −1}}, i = 1, 2, by stratified 5-fold cross validation. (An experiment-loop sketch follows the table.)
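
To make the reported split-and-tuning protocol concrete, here is a minimal Python sketch assuming scikit-learn. The Iris data and the NearestCentroid(shrink_threshold=...) estimator are stand-ins chosen only so the loop runs end-to-end; the paper tunes the Wasserstein radii of its own likelihood-based classifier, whose actual code and API live in the linked repository.

from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split
from sklearn.neighbors import NearestCentroid

X, y = load_iris(return_X_y=True)  # Iris is one of the standard UCI datasets

# 75% training / 25% test split, as reported in the paper
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# radius grid: eps in {a * 10**b : a in {1, ..., 9}, b in {-3, -2, -1}}
radius_grid = [a * 10.0 ** b for b in (-3, -2, -1) for a in range(1, 10)]

# stratified 5-fold cross validation on the training set only
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
mean_scores = {
    eps: cross_val_score(NearestCentroid(shrink_threshold=eps),
                         X_train, y_train, cv=cv).mean()
    for eps in radius_grid
}
best_eps = max(mean_scores, key=mean_scores.get)  # radius with best CV accuracy
print(f"selected radius: {best_eps:g}")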
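
Similarly, a minimal sketch of the synthetic experiment loop, assuming only NumPy: for each training-set size Ni, run 100 repetitions, each drawing Ni observations from a binomial model with M = 20 trials and θtrue = 0.6. The optimistic_likelihood call is a hypothetical placeholder for the paper's approximation routine; the sample-mean estimate is included only so the loop computes something concrete.

import numpy as np

rng = np.random.default_rng(seed=0)
M, theta_true = 20, 0.6            # binomial trials and true success probability
training_sizes = [1, 2, 4, 8, 10]  # values of Ni reported in the paper
n_repetitions = 100                # repetitions per parameter setting

for N in training_sizes:
    estimates = []
    for _ in range(n_repetitions):
        # each observation counts successes in M Bernoulli(theta_true) trials
        samples = rng.binomial(M, theta_true, size=N)
        # value = optimistic_likelihood(samples, radius=eps)  # hypothetical hook
        estimates.append(samples.mean() / M)  # sample-mean MLE as a stand-in
    print(f"N = {N:2d}: average estimate of theta = {np.mean(estimates):.3f}")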