Large-Scale Methods for Distributionally Robust Optimization

Authors: Daniel Levy, Yair Carmon, John C. Duchi, Aaron Sidford

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on MNIST and ImageNet confirm the theoretical scaling of our algorithms, which are 9–36 times more efficient than full-batch methods." and "Section 5 presents experiments where we use DRO to train linear models for digit classification (on a mixture between MNIST [44] and typed digits [19]), and ImageNet [60]."
Researcher Affiliation | Academia | "Daniel Levy, Yair Carmon, John C. Duchi, and Aaron Sidford, Stanford University; {danilevy,jduchi,sidford}@stanford.edu; ycarmon@cs.tau.ac.il"
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available on GitHub at https://github.com/daniellevy/fast-dro/.
Open Datasets | Yes | "Our digit recognition experiment reproduces [22, Section 3.2], where the training data includes the 60K MNIST training images mixed with 600 images of typed digits from [19], while our ImageNet experiment uses the ILSVRC-2012 1000-way classification task." and "MNIST [44] and ImageNet [60]." (see the data-loading sketch after this table)
Dataset Splits | No | The paper mentions training data for MNIST and ImageNet and discusses a test set, but it does not specify a validation split (percentages, sample counts, or the method used to create one) as reproducibility would require.
Hardware Specification | No | The paper does not report the hardware used for its experiments (specific GPU/CPU models, processor types, or memory amounts).
Software Dependencies | No | The paper mentions using "PyTorch [56]" but does not give version numbers for PyTorch or any other software dependency, which reproducible descriptions require.
Experiment Setup | Yes | "Appendix F.2 details our hyper-parameter settings and their tuning procedures."
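
For the Open Datasets row above, here is a minimal sketch of how the mixed digit training set (the 60K MNIST training images plus 600 typed-digit images) might be assembled in PyTorch, the framework the paper reports using. The typed-digits path, folder layout, and preprocessing choices are illustrative assumptions, not details taken from the paper:

```python
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

# Normalize both sources to single-channel 28x28 tensors (illustrative choice).
transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((28, 28)),
    transforms.ToTensor(),
])

# The 60K MNIST training images [44], downloaded via torchvision.
mnist_train = datasets.MNIST(root="./data", train=True, download=True,
                             transform=transform)

# Hypothetical path: the 600 typed-digit images from [19], arranged with one
# subdirectory per class, as torchvision's ImageFolder expects.
typed_digits = datasets.ImageFolder(root="./data/typed_digits",
                                    transform=transform)

# The mixture described in the paper: 60,600 training examples in total.
mixed_train = ConcatDataset([mnist_train, typed_digits])
loader = DataLoader(mixed_train, batch_size=128, shuffle=True)
```

ConcatDataset simply concatenates the two datasets, so when the loader shuffles, the 600 typed digits are interleaved with the 60K MNIST examples in each epoch.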