The ℓ2,1-Norm Stacked Robust Autoencoders for Domain Adaptation
Authors: Wenhao Jiang, Hongchang Gao, Fu-lai Chung, Heng Huang
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results demonstrate that the proposed method is very effective on multiple cross-domain classification datasets, including the Amazon review dataset, the spam dataset from the ECML/PKDD Discovery Challenge 2006, and the 20 Newsgroups dataset. |
| Researcher Affiliation | Academia | (1) Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX, USA; (2) Department of Computing, Hong Kong Polytechnic University, Hung Hom, Hong Kong, China |
| Pseudocode | Yes | Algorithm 1 ℓ2,1-norm Robust Autoencoder |
| Open Source Code | No | The paper does not provide any links to its own source code or explicitly state that it has been made publicly available. |
| Open Datasets | Yes | We test and analyze the proposed method on the Amazon review dataset (Blitzer, Dredze, and Pereira 2007), the ECML/PKDD 2006 spam dataset (Bickel 2008), and the 20 Newsgroups dataset. [Dataset URLs: http://www.cs.jhu.edu/~mdredze/datasets/sentiment/, http://www.ecmlpkdd2006.org/challenge.html, http://qwone.com/~jason/20Newsgroups/] |
| Dataset Splits | Yes | Hence, we simply use a validation set containing a small number of labeled samples selected randomly from target domain to select parameters for feature learning algorithms. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU/CPU models or memory. |
| Software Dependencies | No | The paper mentions using a 'linear SVM (Chang and Lin 2011)' and discusses other methods but does not provide specific version numbers for any software dependencies needed for reproducibility. |
| Experiment Setup | Yes | There are three parameters in our method: the intensity of the non-linear transformation α, the regularizer coefficient λ, and the number of layers. We study the effects of these parameters on the B→D, Public U0, and Comp vs. Rec datasets. We fixed the number of layers and plotted the accuracies with different values of α and λ in Figure 1. The numbers of layers are 5, 3, and 3 for the B→D, Public U0, and Comp vs. Rec datasets respectively. |
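As background for the pseudocode row above (Algorithm 1, the ℓ2,1-norm robust autoencoder), the ℓ2,1 norm of a matrix is the sum of the ℓ2 norms of its rows, which promotes row-wise sparsity when used as a loss or regularizer. A minimal NumPy sketch follows; the function name `l21_norm` is illustrative and not taken from the paper.

```python
import numpy as np

def l21_norm(W):
    """ℓ2,1 norm: sum over rows of the row-wise ℓ2 norms of W."""
    return float(np.sum(np.sqrt(np.sum(W ** 2, axis=1))))

# Rows have ℓ2 norms 5, 0, and 13, so the ℓ2,1 norm is 18.
W = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [5.0, 12.0]])
print(l21_norm(W))  # 18.0
```

Because each row contributes its (unsquared) ℓ2 norm, minimizing this quantity tends to zero out entire rows, which is what makes it attractive for robust reconstruction objectives.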