Unsupervised Domain Adaptation via Discriminative Manifold Embedding and Alignment
Authors: You-Wei Luo, Chuan-Xian Ren, Pengfei Ge, Ke-Kun Huang, Yu-Feng Yu
AAAI 2020, pp. 5029-5036 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments have been conducted to investigate the proposal and results of the comparison study manifest the superiority of consistent manifold learning framework. |
| Researcher Affiliation | Academia | (1) School of Mathematics, Sun Yat-Sen University, China; (2) School of Mathematics, Jiaying University, China; (3) Department of Statistics and Institute of Intelligent Finance, Guangzhou University, China |
| Pseudocode | No | The paper describes the method using mathematical formulations and text, but no explicit pseudocode or algorithm block is provided. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | Office-Home (Venkateswara et al. 2017)... ImageCLEF-DA... VisDA-2017 (Peng et al. 2017) |
| Dataset Splits | No | The paper states that "three popular domain adaptation datasets are selected and the standard evaluation protocols are adopted" and that, "Following the previous protocol (Long et al. 2018)", adaptation tasks are conducted between Caltech (C), ImageNet (I), and Pascal (P). This implies standard splits are used, but the paper itself does not explicitly state the percentages or methodology for these splits. |
| Hardware Specification | No | The paper describes optimizer settings and batch sizes but does not provide specific hardware details such as GPU/CPU models or memory specifications used for experiments. |
| Software Dependencies | No | Adam Optimizer (lr = 0.0002, β1 = 0.9, β2 = 0.999) with batch size of 50 is utilized on Office-Home and ImageCLEF-DA datasets; the modified mini-batch SGD (Ganin et al. 2016) (lr = 0.003, momentum = 0.9, weight decay = 5e-4) with batch size of 32 is employed on the VisDA-2017 challenge. This text mentions optimizers but does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | A two-layer Riemannian manifold learning scheme is carried out in all experiments (i.e., l = 2), where the first layer (1024-d) is activated by Leaky ReLU (α = 0.2) and the second layer (512-d) by Tanh. Adam Optimizer (lr = 0.0002, β1 = 0.9, β2 = 0.999) with batch size of 50 is utilized on Office-Home and ImageCLEF-DA datasets; the modified mini-batch SGD (Ganin et al. 2016) (lr = 0.003, momentum = 0.9, weight decay = 5e-4) with batch size of 32 is employed on the VisDA-2017 challenge. The learning rate of the CNN backbone layers is set to 0.1·lr. The hyperparameters are determined by trial and error; specifically, λ1 and λ2 are set to 1e1 and 5e3, respectively. The Top-1 scheme is adopted for the target intra-class loss in Eq. (3). (A hedged configuration sketch follows the table.) |
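The reported setup translates fairly directly into code. The sketch below shows a minimal PyTorch rendering of the two-layer embedding head (1024-d Leaky ReLU(0.2), then 512-d Tanh) and the Adam configuration quoted for Office-Home / ImageCLEF-DA, with the 0.1× backbone learning rate handled through parameter groups. The 2048-d input dimension and the `nn.Identity` backbone placeholder are assumptions for illustration; the paper does not specify them in this excerpt.

```python
import torch
import torch.nn as nn


class EmbeddingHead(nn.Module):
    """Two-layer embedding head as described in the experiment setup.

    The in_dim default of 2048 (e.g., ResNet-50 pooled features) is an
    assumption; the table only specifies the 1024-d and 512-d layers.
    """

    def __init__(self, in_dim: int = 2048):
        super().__init__()
        # First layer: 1024-d, Leaky ReLU with negative slope 0.2.
        self.layer1 = nn.Sequential(nn.Linear(in_dim, 1024), nn.LeakyReLU(0.2))
        # Second layer: 512-d, Tanh activation.
        self.layer2 = nn.Sequential(nn.Linear(1024, 512), nn.Tanh())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layer2(self.layer1(x))


# Hypothetical backbone placeholder; in practice this would be the
# pretrained CNN whose layers receive 0.1x the base learning rate.
backbone = nn.Identity()
head = EmbeddingHead()

base_lr = 2e-4  # lr = 0.0002 as quoted for Office-Home / ImageCLEF-DA
optimizer = torch.optim.Adam(
    [
        {"params": backbone.parameters(), "lr": 0.1 * base_lr},
        {"params": head.parameters(), "lr": base_lr},
    ],
    betas=(0.9, 0.999),
)
```

For the VisDA-2017 setting the table instead quotes mini-batch SGD (lr = 0.003, momentum = 0.9, weight decay = 5e-4, batch size 32), which would replace the Adam optimizer above with `torch.optim.SGD` using those values.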