Deep Asymmetric Transfer Network for Unbalanced Domain Adaptation
Authors: Daixin Wang, Peng Cui, Wenwu Zhu
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiments on two real-world datasets demonstrate that DATN attains a substantial gain over state-of-the-art methods. |
| Researcher Affiliation | Academia | Daixin Wang, Peng Cui, Wenwu Zhu; Tsinghua National Laboratory for Information Science and Technology, Department of Computer Science and Technology, Tsinghua University, Beijing, China. dxwang0826@gmail.com, cuip@mail.tsinghua.edu.cn, wwzhu@tsinghua.edu.cn |
| Pseudocode | Yes | The full algorithm is shown in Alg. 1. |
| Open Source Code | No | The paper does not provide any statement or link regarding the release of open-source code for the described methodology. |
| Open Datasets | Yes | In our experiments, we use two real-world datasets, i.e. NUS-WIDE and AMAZON REVIEWS. NUS-WIDE (Chua et al. 2009) is a public web image dataset... AMAZON REVIEWS (Prettenhofer and Stein 2010) is a cross-language dataset... |
| Dataset Splits | Yes | The final values of all the parameters are determined by using 5-fold cross-validation on the training set. A minimal sketch of this selection protocol follows the table. |
| Hardware Specification | No | The paper states 'Our approach is implemented in Tensorflow' but does not provide any specific details about the hardware (e.g., GPU, CPU models) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'Our approach is implemented in Tensorflow' but does not specify a version number for TensorFlow or any other software dependencies. |
| Experiment Setup | Yes | The values of α and β are selected from {0, 0.5, 1, 2, 5, 10}. The regularization parameters λ₁ and λ₂ are set as 0.1 and 0.0001, respectively. Throughout the experiments, the learning rate is set as 0.0001, the decay is set as 0.8 and the momentum is set as 0.8. Table 3 gives the number of neurons in each layer of DATN. An optimizer-configuration sketch follows the table. |
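
The dataset-split entry pins down only the selection procedure (5-fold cross-validation on the training set over the reported {0, 0.5, 1, 2, 5, 10} grid for α and β), not the model. The sketch below shows one way that protocol could be wired up; `train_and_score` is a hypothetical placeholder, since the paper releases no code, so this illustrates the selection protocol rather than DATN itself.

```python
import itertools

import numpy as np
from sklearn.model_selection import KFold

GRID = [0, 0.5, 1, 2, 5, 10]  # candidate values for alpha and beta, as reported


def train_and_score(X_tr, y_tr, X_val, y_val, alpha, beta):
    """Hypothetical placeholder: train a model with trade-off weights
    (alpha, beta) on the fold's training part and return a validation
    metric. Replace with a real DATN training routine."""
    rng = np.random.default_rng(int(alpha * 10 + beta))
    return rng.random()


def select_alpha_beta(X, y, n_splits=5):
    """Pick (alpha, beta) by n-fold cross-validation on the training set,
    matching the selection protocol the paper reports."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    best_params, best_score = None, -np.inf
    for alpha, beta in itertools.product(GRID, GRID):
        scores = [
            train_and_score(X[tr], y[tr], X[va], y[va], alpha, beta)
            for tr, va in kf.split(X)
        ]
        mean_score = float(np.mean(scores))
        if mean_score > best_score:
            best_params, best_score = (alpha, beta), mean_score
    return best_params
```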
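
The experiment-setup entry reports a learning rate of 0.0001, decay of 0.8, and momentum of 0.8, and the paper states the approach is implemented in TensorFlow, but it does not name the optimizer. The (decay, momentum) pair happens to match the signature of TensorFlow 1.x's `RMSPropOptimizer`, so the sketch below assumes that optimizer; treat it as one plausible reading, not the authors' confirmed configuration.

```python
import tensorflow as tf


def build_train_op(loss):
    """Assumed optimizer: the (decay, momentum) pair reported in the paper
    matches TF1's RMSPropOptimizer signature; the paper itself does not
    say which optimizer was used."""
    optimizer = tf.compat.v1.train.RMSPropOptimizer(
        learning_rate=1e-4,  # reported learning rate
        decay=0.8,           # reported decay
        momentum=0.8,        # reported momentum
    )
    return optimizer.minimize(loss)
```

Under TF2/Keras, the closest equivalent would be `tf.keras.optimizers.RMSprop(learning_rate=1e-4, rho=0.8, momentum=0.8)`, where `rho` plays the role of `decay`.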