Domain-Shared Group-Sparse Dictionary Learning for Unsupervised Domain Adaptation
Authors: Baoyao Yang, Andy J. Ma, Pong C. Yuen
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on cross-domain face and object recognition show that the proposed method outperforms eight state-of-the-art unsupervised domain adaptation algorithms. (Abstract) |
| Researcher Affiliation | Academia | 1Department of Computer Science, Hong Kong Baptist University, Hong Kong 2School of Data and Computer Science, Sun Yat-sen University, China |
| Pseudocode | No | Not found. The paper describes algorithms verbally and mathematically but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | Not found. The paper does not explicitly state that source code is available or provide a link to a repository. |
| Open Datasets | Yes | CMU-PIE dataset (Sim, Baker, and Bsat 2002) is used for experiments of face recognition across blur and illumination variations. |
| Dataset Splits | No | Not found. The paper describes source and target domains and test performance but does not specify separate validation splits or how they are used for hyperparameter tuning. |
| Hardware Specification | No | Not found. The paper does not provide any specific hardware details such as GPU/CPU models or other computing specifications used for running experiments. |
| Software Dependencies | No | Not found. The paper mentions using 'DeCAF6 features' but does not specify any software names with version numbers for implementation or dependencies. |
| Experiment Setup | Yes | For learning the domain-shared group-sparse dictionary, the sizes of each sub-dictionary are 5 and 10 for face and object recognition, respectively. We set the number of remaining bases as 2 for experiments. The sparsity coefficient λ is set as 0.1, which follows the experimental settings in TSC (Long et al. 2013a). Hyper-parameter experiments are done to analyze the parameter sensitivity of parameters η, δ, μ and β in the optimization function (Equation (6)). For learning target classifier, the parameter γ is set as 1 to balance each term in Equation (21). (Section 4.1) |
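The setup above fixes a sub-dictionary size of 5 (face recognition) and a sparsity coefficient λ = 0.1. The paper's full optimization is not reproduced here, but the role of these two parameters can be illustrated with a minimal sparse-coding sketch: given a fixed dictionary, codes are obtained by minimizing a reconstruction term plus an λ-weighted ℓ1 penalty, solved below with plain ISTA in NumPy. The dimensions, data, and solver are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(v, t):
    # Element-wise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code_ista(X, D, lam=0.1, n_iter=200):
    """Solve min_A 0.5*||X - D A||_F^2 + lam*||A||_1 with ISTA.

    X: (d, n) data matrix, D: (d, k) dictionary; returns codes A: (k, n).
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        grad = D.T @ (D @ A - X)           # gradient of the quadratic term
        A = soft_threshold(A - grad / L, lam / L)
    return A

rng = np.random.default_rng(0)
d, k, n = 20, 5, 30                        # feature dim; k=5 mirrors the face sub-dictionary size
D = rng.standard_normal((d, k))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
X = rng.standard_normal((d, n))
A = sparse_code_ista(X, D, lam=0.1)        # lam=0.1 as in the paper's setting
print(A.shape)
```

Raising `lam` drives more entries of `A` to exactly zero, which is the sparsity trade-off the λ = 0.1 setting controls in the paper's objective.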