Principled learning method for Wasserstein distributionally robust optimization with local perturbations

Authors: Yongchan Kwon, Wonyoung Kim, Joong-Ho Won, Myunghee Cho Paik

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical experiments demonstrate the robustness of the proposed method using image classification datasets. "Our results show that the proposed method achieves significantly higher accuracy than baseline models on contaminated datasets."
Researcher Affiliation | Academia | (1) Department of Biomedical Data Science, Stanford University; (2) Department of Statistics, Seoul National University.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | "Tensorflow (Abadi et al., 2016)-based scripts are available at https://github.com/ykwon0407/wdro_local_perturbation."
Open Datasets | Yes | "We use the two image classification datasets: CIFAR-10 and CIFAR-100 (Krizhevsky, 2009)."
Dataset Splits | No | "For the training, we randomly select 2500, 5000, 25000, or 50000 images from the original datasets, keeping the number of images per class equal. For the testing, we use the original test datasets." This describes training and test splits, but no distinct validation split for hyperparameter tuning.
Hardware Specification | No | The paper does not describe the hardware used to run its experiments, such as specific GPU or CPU models.
Software Dependencies | No | The paper mentions "Tensorflow (Abadi et al., 2016)-based scripts" but does not specify a version number for TensorFlow or any other software dependency.
Experiment Setup | No | The paper states, "Further implementation details are available in the Supplementary Material and Tensorflow (Abadi et al., 2016)-based scripts are available at https://github.com/ykwon0407/wdro_local_perturbation." Specific experimental setup details, such as hyperparameters, are not provided in the main text.
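The split protocol quoted in the Dataset Splits row (randomly selecting 2500, 5000, 25000, or 50000 training images with an equal number per class) amounts to class-balanced subsampling. The sketch below is illustrative only, assuming integer class labels; `balanced_subsample` is a hypothetical helper, not code from the paper's repository.

```python
import random
from collections import Counter

def balanced_subsample(labels, n_total, seed=0):
    """Return n_total indices drawn without replacement, equal count per class.

    Hypothetical sketch of the quoted protocol ("keeping the number of
    images per class equal"); not the authors' implementation.
    """
    rng = random.Random(seed)
    classes = sorted(set(labels))
    per_class = n_total // len(classes)
    chosen = []
    for c in classes:
        pool = [i for i, y in enumerate(labels) if y == c]
        chosen.extend(rng.sample(pool, per_class))
    return chosen

# CIFAR-10-like label vector: 10 classes, 5000 training images each
labels = [c for c in range(10) for _ in range(5000)]
subset = balanced_subsample(labels, 2500)
counts = Counter(labels[i] for i in subset)  # 250 indices per class
```

For the 2500-image CIFAR-10 setting this yields 250 images per class; the same helper covers the 5000, 25000, and 50000 settings by changing `n_total`.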