Distant Domain Transfer Learning
Authors: Ben Tan, Yu Zhang, Sinno Pan, Qiang Yang
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical studies on image classification problems demonstrate the effectiveness of the proposed algorithm, and on some tasks the improvement in terms of the classification accuracy is up to 17% over non-transfer methods. ... In this section, we conduct empirical studies to evaluate the proposed SLA algorithm from three aspects. |
| Researcher Affiliation | Academia | Hong Kong University of Science and Technology, Hong Kong; Nanyang Technological University, Singapore |
| Pseudocode | Yes | Algorithm 1 The Selective Learning Algorithm (SLA) |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | The datasets used for experiments include Caltech-256 (Griffin, Holub, and Perona 2007) and Animals with Attributes (AwA). ... http://attributes.kyb.tuebingen.mpg.de/ |
| Dataset Splits | No | The paper states 'for each target domain, we randomly sample 6 labeled instances for training, and use the rest for testing' but does not specify a separate validation split or explicit methodology for validation data. |
| Hardware Specification | No | The paper mentions deep learning models and CNNs but does not provide any specific details about the hardware (e.g., GPU, CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions software components like SVM, CNN, DTL, STL, and specific kernel and layer types but does not provide any version numbers for programming languages, libraries, or other software dependencies. |
| Experiment Setup | Yes | For SVM, we use the linear kernel. For CNN, we implement a network that is composed of two convolutional layers with kernel size 3×3, where each convolutional layer is followed by a max pooling layer with kernel size 2×2, a fully connected layer, and a logistic regression layer. ... Each configuration is repeated 10 times. |
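
The CNN described in the experiment setup (two 3×3 convolutional layers, each followed by 2×2 max pooling, then a fully connected layer and a logistic regression layer) can be sketched as below. The paper does not specify filter counts, input resolution, hidden width, or the number of classes, so the values here (16/32 filters, 32×32 RGB inputs, 128 hidden units, 2 classes) are illustrative assumptions only, not the authors' configuration.

```python
import torch
import torch.nn as nn


class SmallCNN(nn.Module):
    """Sketch of the CNN from the experiment setup: two 3x3 conv layers,
    each followed by 2x2 max pooling, then a fully connected layer and a
    logistic-regression (softmax) output layer.

    Filter counts, input size, hidden width, and class count are NOT
    given in the paper; the values below are assumptions for illustration.
    """

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # assumed 16 filters
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),                  # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # assumed 32 filters
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),                  # 16x16 -> 8x8
        )
        self.fc = nn.Linear(32 * 8 * 8, 128)              # assumed hidden width
        self.classifier = nn.Linear(128, num_classes)     # logistic regression layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        h = torch.relu(self.fc(h))
        return self.classifier(h)  # logits; apply softmax for class probabilities
```

A forward pass on a batch of 32×32 RGB images yields one logit per class; training would minimize cross-entropy over the 6 labeled target-domain instances sampled per configuration.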