Outlier-Robust Distributionally Robust Optimization via Unbalanced Optimal Transport

Authors: Zifan Wang, Yi Shen, Michael Zavlanos, Karl H. Johansson

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we provide empirical results that demonstrate that our method offers improved robustness to outliers and is computationally less demanding for regression and classification tasks."
Researcher Affiliation | Academia | Zifan Wang (KTH Royal Institute of Technology, zifanw@kth.se); Yi Shen (Duke University, yi.shen478@duke.edu); Michael M. Zavlanos (Duke University, michael.zavlanos@duke.edu); Karl H. Johansson (KTH Royal Institute of Technology, kallej@kth.se)
Pseudocode | Yes | Algorithm 1: Distributionally robust optimization with outliers
Open Source Code | No | Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in the supplemental material? Answer: [No] Justification: The main contribution of this paper is theoretical.
Open Datasets | No | The paper describes synthetic data generation processes (e.g., "We generate a clean data distribution P_n with n samples, which is uniform over {(X_i, θ^T X_i)}_{i=1}^n, where X_1, ..., X_n are i.i.d. from N(0, I_d)."). While it references external works for data distribution types, it does not provide concrete access information (links, DOIs, repositories, or direct citations) for publicly available datasets used in the experiments.
Dataset Splits | No | The paper does not explicitly mention or detail a validation dataset split. It discusses empirical risk minimization and evaluation of excess risk, but without specifying a dedicated validation set.
Hardware Specification | Yes | "All experiments were conducted on an Intel Core i7-1185G7 CPU (3.00GHz) using Python 3.8."
Software Dependencies | Yes | "All experiments were conducted on an Intel Core i7-1185G7 CPU (3.00GHz) using Python 3.8. ... We use the stochastic sub-gradient method in Algorithm 1 to solve the Lagrangian penalty problem and the GUROBI [33] solver to solve all other DRO problems we use as benchmarks."
Experiment Setup | Yes | "We set the parameters as follows: λ = 10, β = 6, λ2 = 5. All the results are averaged over 10 independent runs. We fix ε = 0.1 and C = 8."
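The synthetic setup quoted above can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the authors' code: the ground-truth vector `theta`, the sample sizes, and the function name are hypothetical, and the reported hyperparameters (λ, β, λ2, ε, C) are collected in a dictionary for reference only.

```python
import numpy as np

# Hyperparameters reported in the paper (collected here for reference only;
# the key names are this sketch's own, not the authors').
PARAMS = {"lam": 10.0, "beta": 6.0, "lam2": 5.0, "eps": 0.1, "C": 8.0}

def generate_clean_distribution(n, d, theta, seed=None):
    """Clean data distribution P_n: uniform over the n pairs (X_i, theta^T X_i),
    where X_1, ..., X_n are drawn i.i.d. from N(0, I_d).

    `theta` (a ground-truth regression vector) is an assumption of this sketch.
    """
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, d))  # rows X_i ~ N(0, I_d)
    y = X @ theta                    # noiseless responses theta^T X_i
    return X, y

# Example: n = 200 samples in d = 5 dimensions.
theta = np.ones(5)
X, y = generate_clean_distribution(200, 5, theta, seed=0)
```

Outlier contamination and the DRO training loop itself are not specified in the excerpts above, so they are deliberately left out of this sketch.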