Deep Nonlinear Feature Coding for Unsupervised Domain Adaptation
Authors: Pengfei Wei, Yiping Ke, Chi Keong Goh
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 4 ("Experimental Results"): "In this section, we evaluate the performance of DNFC by comparing with state-of-the-art domain adaptation methods. We also study the properties of DNFC and analyze the effect of the two new elements in DNFC." |
| Researcher Affiliation | Collaboration | Pengfei Wei (Nanyang Technological University, Singapore); Yiping Ke (Nanyang Technological University, Singapore); Chi Keong Goh (Rolls-Royce Advanced Technology Centre, Singapore) |
| Pseudocode | Yes | Algorithm 1 (Deep Nonlinear Feature Coding). Input: source data matrix X_src, target data matrix X_tar, and the number of layers L. For k = 1 to L: (1) select the kernel function using cross validation on the source; (2) learn the coding Z^k_tar by Eq. (9); (3) set X^{k+1}_src and X^{k+1}_tar. Output: feature codings {Z^k_tar}, k = 1, ..., L. (A hedged Python sketch of this loop appears after the table.) |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | We use two benchmark datasets that are widely used in domain adaptation. Amazon product review dataset [Blitzer et al., 2007]... 20-Newsgroups [Dai et al., 2007] |
| Dataset Splits | Yes | When a domain is selected as source (target), all the samples in this domain are used as training (test) data... In our DNFC, we conduct the cross validation on source to automatically select between rbf and linear kernels at each layer. For mSDA and DNFC, we set the default number of layers as three and do the cross validation on source to select the best corruption probability p between 0.1 and 0.9 with step size 0.1. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., CPU, GPU models, memory, or cloud instances) used for running the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., Python, PyTorch, or specific solver versions). |
| Experiment Setup | Yes | In our DNFC, we conduct the cross validation on source to automatically select between rbf and linear kernels at each layer. For mSDA and DNFC, we set the default number of layers as three and do the cross validation on source to select the best corruption probability p between 0.1 and 0.9 with step size 0.1. ... we use 1-NN as the base classifier since it avoids model parameter tuning. (A hedged sketch of this parameter selection follows the Algorithm 1 sketch below.) |
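
The quoted pseudocode gives the layer-wise structure of Algorithm 1 but not Eq. (9), the per-layer coding step, which is not reproduced in the table. The sketch below is a minimal Python rendering of that loop under stated assumptions: it substitutes the closed-form marginalized denoising layer of mSDA (Chen et al., 2012), a plausible stand-in given that the quoted setup tunes an mSDA-style corruption probability p, for the paper's kernelized Eq. (9), and reduces the per-layer kernel selection to a comment. The names `msda_layer` and `dnfc_sketch` and the small ridge term are illustrative, not the authors' implementation.

```python
import numpy as np

def msda_layer(X, p):
    """One closed-form marginalized denoising layer (Chen et al., 2012),
    used here as a hedged stand-in for the paper's kernelized coding
    step, Eq. (9). X is (d, n) with samples as columns; p is the
    corruption probability. Returns the mapping W and tanh(W [X; 1])."""
    d, n = X.shape
    Xb = np.vstack([X, np.ones((1, n))])      # append a bias row
    S = Xb @ Xb.T                             # scatter matrix
    q = np.full(d + 1, 1.0 - p)               # feature survival probabilities
    q[-1] = 1.0                               # the bias is never corrupted
    Q = S * np.outer(q, q)                    # E[x~ x~^T], off-diagonal terms
    np.fill_diagonal(Q, q * np.diag(S))       # diagonal scales by q_i only
    P = S[:d, :] * q                          # E[x x~^T], no bias output row
    W = np.linalg.solve(Q + 1e-5 * np.eye(d + 1), P.T).T  # ridge for stability
    return W, np.tanh(W @ Xb)

def dnfc_sketch(X_src, X_tar, L=3, p=0.5):
    """Layer-wise loop of Algorithm 1. At each layer the paper (1) picks
    an rbf or linear kernel by cross validation on the source and
    (2) learns Z^k_tar via Eq. (9); this sketch instead fits one
    denoising layer on both domains jointly and re-encodes each."""
    codings, Xs, Xt = [], X_src, X_tar
    for _ in range(L):
        W, _ = msda_layer(np.hstack([Xs, Xt]), p)
        Xs = np.tanh(W @ np.vstack([Xs, np.ones((1, Xs.shape[1]))]))
        Xt = np.tanh(W @ np.vstack([Xt, np.ones((1, Xt.shape[1]))]))
        codings.append(Xt)                    # analogue of Z^k_tar
    return codings
```

Fitting each layer on the concatenated domains mirrors the unsupervised use of both X_src and X_tar in the algorithm's input, and collecting only the target codings matches the algorithm's output {Z^k_tar}.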
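
The setup rows quote source-only cross validation for both the per-layer kernel choice (rbf vs. linear) and the corruption probability p on a 0.1 to 0.9 grid with step 0.1, scored with a 1-NN base classifier. The sketch below shows the p grid only, reusing `dnfc_sketch` from above; the fold count and the device of encoding the source against itself (so that labeled source data can score the features) are assumptions, as the quoted text specifies neither.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def select_corruption_prob(X_src, y_src, L=3, folds=5):
    """Grid-search p in {0.1, ..., 0.9} by cross validation on the
    source domain, scoring with 1-NN as in the paper. X_src is (d, n)
    with samples as columns; folds=5 is an assumption."""
    best_p, best_acc = None, -np.inf
    for p in np.arange(0.1, 1.0, 0.1):
        # Encode the source against itself so the learned features can
        # be scored on labeled source data (an assumption; the paper
        # does not say how the source-only validation is run).
        Z = dnfc_sketch(X_src, X_src, L=L, p=p)[-1]
        acc = cross_val_score(KNeighborsClassifier(n_neighbors=1),
                              Z.T, y_src, cv=folds).mean()
        if acc > best_acc:
            best_p, best_acc = p, acc
    return best_p
```

A per-layer rbf/linear kernel choice would be selected the same way, by scoring each candidate encoder with the same source-only 1-NN cross validation.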