Outlier-Robust Wasserstein DRO

Authors: Sloan Nietert, Ziv Goldfeld, Soroosh Shafiee

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We present experiments validating our theory on standard regression and classification tasks."
Researcher Affiliation | Academia | Sloan Nietert (Cornell University, nietert@cs.cornell.edu); Ziv Goldfeld (Cornell University, goldfeld@cornell.edu); Soroosh Shafiee (Cornell University, shafiee@cornell.edu)
Pseudocode | No | The paper describes algorithms but does not provide them in pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/sbnietert/outlier-robust-WDRO.
Open Datasets | Yes | "Finally, we turn to a classification task with image data. We train a robust linear classifier to distinguish between the MNIST [14] digits 3 and 8 when 10% of training labels are flipped uniformly at random."
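As a concrete illustration of the corruption model quoted above, here is a minimal NumPy sketch of flipping 10% of binary training labels uniformly at random. The function name and the 0/1 label encoding are illustrative assumptions, not taken from the paper's code, and the MNIST loading and 3-vs-8 filtering are omitted:

```python
import numpy as np

def flip_labels(y, frac=0.1, rng=None):
    """Return a copy of binary label vector y with a `frac` fraction
    of entries flipped, chosen uniformly at random without replacement."""
    rng = np.random.default_rng(rng)
    y = np.asarray(y).copy()
    n_flip = int(round(frac * len(y)))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y[idx] = 1 - y[idx]  # assumes 0/1 encoding (e.g. digit 3 -> 0, digit 8 -> 1)
    return y

# Example: 1000 clean labels, 10% flipped
y_clean = np.zeros(1000, dtype=int)
y_noisy = flip_labels(y_clean, frac=0.1, rng=0)
print((y_noisy != y_clean).mean())  # 0.1
```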
Dataset Splits | No | The paper does not explicitly state training/validation/test splits with percentages or sample counts. It mentions "standard regression and classification tasks" and refers to the MNIST dataset, which has well-known splits, but does not state them within the paper text.
Hardware Specification | Yes | "The experiments below were run in 80 minutes on an M1 MacBook Air with 16GB RAM. The additional experiments were performed on an M1 MacBook Air with 16GB RAM in roughly 30 minutes each."
Software Dependencies | Yes | "Implementations were performed in MATLAB using the YALMIP toolbox [33] and the Gurobi and SeDuMi solvers [23, 50]."
Experiment Setup | No | The paper fixes parameters such as d = 10, C = 8, ε = 0.1, ρ = 0.1, and ε̂ for its experiments, but it does not provide comprehensive details on hyperparameters such as learning rate, batch size, or optimizer settings, nor model initialization or training schedules, as typically found in an "Experimental Setup" section.
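For context on the kind of objective the quoted radius ε configures, a hedged sketch of the classical (non-outlier-robust) Wasserstein DRO duality for linear regression with the absolute loss: under a type-1 Wasserstein ball with Euclidean cost on the features, the worst-case risk reduces to the empirical risk plus an ε-scaled norm penalty on the weights. This is the standard WDRO result, not the paper's outlier-robust estimator, and all names are illustrative:

```python
import numpy as np

def wdro_linear_objective(theta, X, y, eps):
    """Worst-case absolute-loss risk over a type-1 Wasserstein ball of
    radius eps around the empirical distribution (features perturbed
    under Euclidean cost), via the classical duality:
    empirical risk + eps * ||theta||_2."""
    residuals = np.abs(y - X @ theta)
    return residuals.mean() + eps * np.linalg.norm(theta)

# Tiny example with the paper's quoted radius eps = 0.1
theta = np.array([1.0])
X = np.array([[1.0], [2.0]])
y = np.array([1.0, 1.0])
print(wdro_linear_objective(theta, X, y, eps=0.1))  # 0.6
```

Minimizing this penalized objective over theta recovers a norm-regularized regression, which is why WDRO radii play a role analogous to regularization strengths.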