LAMDA: Label Matching Deep Domain Adaptation
Authors: Trung Le, Tuan Nguyen, Nhat Ho, Hung Bui, Dinh Phung
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on real-world datasets to compare LAMDA with state-of-the-art baselines. The experimental results on the real-world datasets show that our LAMDA is able to reduce the label mismatch and hence achieve better performance. |
| Researcher Affiliation | Collaboration | Department of Data Science and AI, Monash University, Australia; University of Texas at Austin, USA; VinAI Research, Vietnam. |
| Pseudocode | Yes | Algorithm 1 Pseudocode for training LAMDA. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing its source code nor does it provide a link to a code repository. |
| Open Datasets | Yes | We conduct the experiments to compare our LAMDA against the state-of-the-art baselines on the digit, traffic sign, natural scene, Office-Home, Office-31, and Image CLEF-DA datasets. Digit datasets: MNIST, MNIST-M, SVHN, DIGITS. Traffic sign datasets: SIGNS, GTSRB. Natural scene datasets: CIFAR, STL. Object recognition datasets: Office-Home, Office-31, Image CLEF-DA. |
| Dataset Splits | No | The paper mentions training iterations and batch size but does not provide specific details on how the datasets are split into training, validation, and test sets, whether by percentages, sample counts, or explicit references to predefined splits. |
| Hardware Specification | Yes | We implement our LAMDA in Python (version 3.5) using TensorFlow (version 1.9.0) (Abadi et al., 2016) and run our experiments on a computer with an Intel Xeon E5-1660 CPU (8 cores at 3.0 GHz), 128 GB of RAM, and an NVIDIA GeForce GTX Titan X GPU with 12 GB of memory. |
| Software Dependencies | Yes | We implement our LAMDA in Python (version 3.5) using TensorFlow (version 1.9.0) (Abadi et al., 2016). |
| Experiment Setup | Yes | We apply the Adam optimizer (β1 = 0.5, β2 = 0.999) with a learning rate of 0.001 for the digit, traffic sign, and natural scene datasets, whereas 0.0001 is used for the object recognition datasets. All experiments were trained for 20,000 iterations on Office-31, Office-Home, and Image CLEF-DA and 80,000 iterations for the other datasets. The batch size for each dataset is set to 128. We set β = 0, α = 0.5 as described in the ablation study, and γ is searched in {0.1, 0.5}. |
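
The optimizer settings in the Experiment Setup row translate directly into a few lines of configuration code. Below is a minimal sketch of that configuration against the TensorFlow 1.x API the paper reports using (v1.9.0); the `total_loss` tensor, the dataset-group keys, and the `build_train_op` helper are illustrative assumptions, not code from the paper.

```python
# Minimal sketch of the reported training configuration (TensorFlow 1.x style).
# Only the hyperparameter values come from the paper; names are placeholders.
import tensorflow as tf

BATCH_SIZE = 128

# Learning rate per dataset group, as reported: 1e-3 for digits / traffic signs /
# natural scenes, 1e-4 for the object recognition benchmarks.
LEARNING_RATES = {
    "digits": 1e-3, "traffic_signs": 1e-3, "natural_scenes": 1e-3,
    "office_31": 1e-4, "office_home": 1e-4, "image_clef_da": 1e-4,
}

# Training length per dataset group: 20,000 iterations for Office-31,
# Office-Home, Image CLEF-DA; 80,000 for the remaining datasets.
ITERATIONS = {
    "digits": 80_000, "traffic_signs": 80_000, "natural_scenes": 80_000,
    "office_31": 20_000, "office_home": 20_000, "image_clef_da": 20_000,
}

def build_train_op(total_loss, dataset_group):
    """Adam with beta1 = 0.5, beta2 = 0.999 and a per-group learning rate."""
    optimizer = tf.train.AdamOptimizer(
        learning_rate=LEARNING_RATES[dataset_group],
        beta1=0.5,
        beta2=0.999,
    )
    return optimizer.minimize(total_loss)
```

With a loss tensor in hand, `build_train_op(loss, "office_31")` would be run for `ITERATIONS["office_31"]` steps with batches of 128; the trade-off parameters β, α, and γ enter the loss construction itself and are not shown here.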