Fast Generalized Distillation for Semi-Supervised Domain Adaptation

Authors: Shuang Ao, Xiang Li, Charles Ling

AAAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results show that GDSDA-SVM can effectively utilize the unlabeled data to transfer the knowledge between different domains under the SDA setting."
Researcher Affiliation | Academia | "Shuang Ao, Xiang Li, Charles X. Ling. Department of Computer Science, The University of Western Ontario. sao@uwo.ca, lxiang2@uwo.ca, cling@csd.uwo.ca"
Pseudocode | Yes | "Algorithm 1 GDSDA-SVM"; "Algorithm 2 λ Optimization"
Open Source Code | No | The paper provides no statement or link indicating that source code for the described method is publicly available.
Open Datasets | Yes | "We use the domain adaptation benchmark dataset Office as our experiment dataset. There are 3 subsets in the Office dataset, Webcam (795 examples), Amazon (2817 examples) and DSLR (498 examples), sharing 31 classes. We denote them as W, A and D respectively."
Dataset Splits | Yes | "By minimizing the LOOCV loss on the target data, we can find the optimal imitation parameter."
Hardware Specification | No | The paper gives no details of the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions AlexNet, a multi-layer perceptron (MLP), SVM, and LIBLINEAR, but gives no version numbers for any software dependency or library used in the experiments.
Experiment Setup | Yes | "For GDSDA-SVM, as we are not able to tune the temperature T, we empirically set T = 20 for all experiments in this subsection." "Specifically, we search the imitation parameter λ1 in the range [0, 0.1, ..., 1] with different temperature T." "We use temperature T = 5."
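The setup rows above describe a grid search over the imitation parameter λ1 in [0, 0.1, ..., 1], chosen by minimizing a leave-one-out cross-validation (LOOCV) loss on the labeled target data. The sketch below illustrates that selection loop under stated assumptions: it uses synthetic data in place of the Office benchmark, and a closed-form LOOCV for ridge regression as a generic stand-in for the paper's LS-SVM derivation; the data and function names are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a target domain: a few labeled examples plus
# soft predictions from a source-domain "teacher" model (illustrative only).
n, d, k = 40, 5, 3
X = rng.normal(size=(n, d))
hard = np.eye(k)[rng.integers(0, k, size=n)]   # one-hot ground-truth labels
soft = hard + 0.3 * rng.normal(size=(n, k))    # noisy teacher soft labels

def loocv_mse(X, Y, alpha=1.0):
    """Closed-form leave-one-out MSE for ridge regression (a stand-in
    here for the paper's LS-SVM; the paper derives its own LOOCV form)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    A = Xb.T @ Xb + alpha * np.eye(Xb.shape[1])
    H = Xb @ np.linalg.solve(A, Xb.T)          # hat matrix
    resid = Y - H @ Y
    loo = resid / (1.0 - np.diag(H))[:, None]  # leave-one-out residuals
    return float(np.mean(loo ** 2))

# Grid-search lambda1 over [0, 0.1, ..., 1]: the regression target mixes
# hard labels with teacher soft labels, and the lambda with the lowest
# LOOCV loss on the target data is selected.
grid = np.linspace(0.0, 1.0, 11)
best = min(grid, key=lambda lam: loocv_mse(X, lam * soft + (1 - lam) * hard))
print(f"selected imitation parameter: {best:.1f}")
```

A temperature sweep (e.g., the fixed T = 20 vs. T = 5 settings quoted above) would simply rescale the teacher's logits before softmax when producing `soft`; the selection loop itself is unchanged.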