Distant Transfer Learning via Deep Random Walk
Authors: Qiao Xiao, Yu Zhang
AAAI 2021, pp. 10422-10429
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical studies on several benchmark datasets demonstrate that the proposed DERWENT algorithm yields the state-of-the-art performance. |
| Researcher Affiliation | Academia | (1) Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China; (2) Peng Cheng Laboratory, Shenzhen, China |
| Pseudocode | No | No pseudocode or algorithm block was found. |
| Open Source Code | No | The paper does not provide a statement about releasing open-source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | We conduct experiments on three benchmark datasets, including the Animals with Attributes (AwA) dataset (Xian et al. 2019), the Caltech-256 dataset (Griffin, Holub, and Perona 2007), and the CIFAR-100 dataset (Krizhevsky and Hinton 2009). |
| Dataset Splits | Yes | In each experiment, we randomly selected 10 labeled instances in each class in the target domain for training and the rest for testing. (A split sketch appears after the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions software components like VGG-11, LSTM, and SGD, but does not provide specific version numbers for these or other ancillary software dependencies. |
| Experiment Setup | Yes | For optimization, we use mini-batch SGD with Nesterov momentum 0.9. The batch size is set to 128, comprising 10, 8, and 110 instances from the source, target, and auxiliary domains, respectively. The learning rate is set to 0.01. η in the graph (i.e., Eq. (1)) is initialized to 1.1 and then increased over training as 1.1^(epochs/3). All regularization parameters in the DERWENT model are set to 1. (A training-setup sketch appears after the table.) |
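The per-class split quoted in the Dataset Splits row is straightforward to reproduce. Below is a minimal sketch, assuming a dataset that yields `(instance, label)` pairs; the function name `per_class_split` and the fixed seed are illustrative choices, not taken from the paper.

```python
import random
from collections import defaultdict

def per_class_split(dataset, n_labeled=10, seed=0):
    """Pick `n_labeled` random instances per class for training;
    everything else becomes the test set."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, (_, label) in enumerate(dataset):
        by_class[label].append(idx)
    train_idx, test_idx = [], []
    for indices in by_class.values():
        rng.shuffle(indices)
        train_idx.extend(indices[:n_labeled])  # 10 labeled instances per class
        test_idx.extend(indices[n_labeled:])   # the rest are held out for testing
    return train_idx, test_idx
```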
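The Experiment Setup row also translates directly into a standard PyTorch configuration. The sketch below is a non-authoritative reading of the quoted text: the `model` is a placeholder for the paper's VGG-11/LSTM architecture, and the exact form of the η growth (read here as 1.1^(epochs/3), without flooring the exponent) is an assumption.

```python
import torch

# Placeholder model; the paper's architecture uses a VGG-11 backbone with an LSTM.
model = torch.nn.Linear(512, 100)

# Mini-batch SGD with Nesterov momentum 0.9 and learning rate 0.01,
# as stated in the Experiment Setup row.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, nesterov=True)

def eta(epoch):
    # η starts at 1.1 and is increased with training as 1.1^(epochs/3);
    # whether the exponent is floored is not specified, so it is left real-valued.
    return 1.1 ** (epoch / 3)

# Each batch of 128 mixes 10 source, 8 target, and 110 auxiliary instances.
BATCH_SOURCE, BATCH_TARGET, BATCH_AUX = 10, 8, 110
```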