Aggregating From Multiple Target-Shifted Sources
Authors: Changjian Shui, Zijian Li, Jiaqi Li, Christian Gagné, Charles X. Ling, Boyu Wang
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the proposed method through extensive experiments. The empirical results significantly outperform the baselines. |
| Researcher Affiliation | Academia | ¹Université Laval, ²Guangdong University of Technology, ³Western University, ⁴Canada CIFAR AI Chair, Mila, ⁵Vector Institute. |
| Pseudocode | Yes | Algorithm 1 WADN (unsupervised scenario, one epoch) |
| Open Source Code | No | The paper does not include a statement about open-sourcing its code or a link to a code repository. |
| Open Datasets | Yes | We evaluate the performance on three different datasets: (1) Amazon Review (Blitzer et al., 2007)... (2) Digits. It consists of four digit recognition datasets: MNIST, USPS (Hull, 1994), SVHN (Netzer et al., 2011), and Synth (Ganin et al., 2016)... (3) Office Home Dataset (Venkateswara et al., 2017). |
| Dataset Splits | Yes | We randomly sample only 10% labeled samples (w.r.t. target dataset in unsupervised DA) as training set and the rest 90% samples as the unseen target test set. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not specify any software names with version numbers required for replication. |
| Experiment Setup | Yes | The implicit and explicit trade-off coefficient is set as ϵ = 0.5. We use a similar hyper-parameter selection strategy as in DANN (Ganin et al., 2016). All reported results are averaged from five runs. The detailed experimental settings are illustrated in Appendix. |
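The split protocol quoted above (a random 10% of the target data as the labeled training set, the remaining 90% held out as the unseen test set) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name and arguments are assumptions.

```python
import random

def split_target(samples, labeled_frac=0.1, seed=0):
    """Illustrative sketch of the paper's target split protocol:
    randomly take `labeled_frac` (10%) of the target samples as the
    labeled training set and hold out the rest as the unseen test set.
    """
    rng = random.Random(seed)  # fixed seed so a single split is reproducible
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    n_train = int(labeled_frac * len(samples))
    train = [samples[i] for i in idx[:n_train]]
    test = [samples[i] for i in idx[n_train:]]
    return train, test

# Example: 100 target samples -> 10 labeled train, 90 unseen test.
train, test = split_target(list(range(100)))
```

Since the paper averages results over five runs, one would repeat this split (and training) with five different seeds and report the mean.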