Sequential Domain Adaptation by Synthesizing Distributionally Robust Experts

Authors: Bahar Taskesen, Man-Chung Yue, Jose Blanchet, Daniel Kuhn, Viet Anh Nguyen

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Numerical experiments on real data show that the robust strategies may outperform non-robust interpolations of the empirical least squares estimators." See also Section 6 (Numerical Experiments): "Table 1 shows the average cumulative loss of each aggregated expert obtained by the BOA algorithm for all datasets and for J ∈ {5, 10, 50, 100} across 100 independent runs."
Researcher Affiliation | Collaboration | Bahar Taskesen (1), Man-Chung Yue (2), José Blanchet (3), Daniel Kuhn (1), Viet Anh Nguyen (3,4). Affiliations: (1) Risk Analytics and Optimization Chair, École Polytechnique Fédérale de Lausanne; (2) Department of Applied Mathematics, The Hong Kong Polytechnic University; (3) Department of Management Science and Engineering, Stanford University; (4) VinAI Research, Vietnam.
Pseudocode | No | The paper describes algorithms in text (e.g., Bernstein Online Aggregation) but does not provide structured pseudocode blocks or algorithm listings.
Open Source Code | Yes | "The corresponding codes are available at https://github.com/RAO-EPFL/DR-DA.git."
Open Datasets | Yes | "We compare the performance of our model against the above non-robust benchmarks on 5 Kaggle datasets"; footnote 3: "Descriptions and download links are provided in the appendix."
Dataset Splits | No | "We use all samples from the source domain for training, and we form the target training set by drawing N_T = d samples from the target dataset. Later, we randomly sample J = 1000 data points from the remaining target samples to form the sequentially arriving target test samples." No explicit validation split is mentioned.
Hardware Specification | Yes | "All experiments are run on an Intel i7-8700 CPU (3.2 GHz) computer with 16GB RAM."
Software Dependencies | Yes | "The second-order cone and semidefinite programs are modelled in MATLAB via YALMIP (Löfberg, 2004) and solved with MOSEK (MOSEK ApS, 2019)."
Experiment Setup | Yes | "We set the regularization parameter of the ridge regression problem to η = 10⁻⁶ and the learning rate of the BOA algorithm to υ = 0.5."
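
The review notes that the paper describes Bernstein Online Aggregation (BOA) only in text, without pseudocode. As a rough illustration, here is a minimal sketch of one common form of the BOA weight update (Wintenberger, 2017) under squared loss, using the learning rate υ = 0.5 quoted above. The function name and interface are hypothetical, not taken from the authors' code.

```python
import numpy as np

def boa_aggregate(expert_preds, targets, lr=0.5):
    """Sketch of Bernstein Online Aggregation over K experts.

    expert_preds: (T, K) array of expert predictions over T rounds
    targets: (T,) array of observed responses
    lr: learning rate (υ in the paper)
    Returns the aggregated predictions and the final expert weights.
    """
    T, K = expert_preds.shape
    w = np.full(K, 1.0 / K)                      # uniform prior over experts
    agg = np.empty(T)
    for t in range(T):
        agg[t] = w @ expert_preds[t]             # weighted-average forecast
        losses = (expert_preds[t] - targets[t]) ** 2
        # excess loss of each expert relative to the aggregate
        excess = losses - (agg[t] - targets[t]) ** 2
        # second-order (Bernstein) correction distinguishes BOA from plain
        # exponentially weighted averaging: exp(-η L - η² L²)
        w = w * np.exp(-lr * excess * (1.0 + lr * excess))
        w = w / w.sum()
    return agg, w
```

Experts whose cumulative excess loss is low receive exponentially larger weights, so a consistently accurate expert dominates the aggregate over time.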
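
The split described under Dataset Splits (N_T = d target training samples, J = 1000 sequentially arriving test points, no validation set) can be sketched as a simple random partition. The function `split_target`, its arguments, and the seeding are hypothetical conveniences, not the authors' code.

```python
import numpy as np

def split_target(target_X, target_y, d, J=1000, seed=0):
    """Sketch of the target-domain split: d training points drawn at
    random, then J test points sampled from the remaining data to form
    the sequential test stream. No validation split is created."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(target_y))
    train_idx, rest = idx[:d], idx[d:]
    test_idx = rng.choice(rest, size=J, replace=False)
    return (target_X[train_idx], target_y[train_idx],
            target_X[test_idx], target_y[test_idx])
```

All source-domain samples would be used for training as stated, so only the target dataset needs partitioning.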