AccGenSVM: Selectively Transferring from Previous Hypotheses
Authors: Diana Benavides-Prado, Yun Sing Koh, Patricia Riddle
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Here, we present experiments and discuss results for binary classification using AccGenSVM and other publicly available HTL methods for homogeneous transfer. |
| Researcher Affiliation | Academia | Diana Benavides-Prado Dept. of Computer Science The University of Auckland dben652@aucklanduni.ac.nz; Yun Sing Koh Dept. of Computer Science The University of Auckland ykoh@cs.auckland.ac.nz; Patricia Riddle Dept. of Computer Science The University of Auckland pat@cs.auckland.ac.nz |
| Pseudocode | Yes | Algorithm 1: Pseudo-code for transfer with AccGenSVM |
| Open Source Code | Yes | AccGenSVM is built on top of LibSVM [Chang and Lin, 2011], using available KL divergence [Hausser and Strimmer, 2014] and FNN implementations [Beygelzimer et al., 2013]. Software available at: https://github.com/nanarosebp/PhDProject/tree/master/AccGenSVM |
| Open Datasets | Yes | Caltech-256 is a benchmark dataset on image recognition... ImageNet is a large benchmark dataset... Office is a small dataset... Caltech-256 object category dataset. [Griffin et al., 2007] |
| Dataset Splits | No | The paper mentions training and test splits, but no explicit validation split is described: 'Transfer level: we extract random samples of sizes 10%, 20% and 30% for training. ... We also select independent random binary samples of size 20%, 30%, and 50% as test sets.' (See the sampling sketch after the table.) |
| Hardware Specification | No | No specific hardware details (such as GPU/CPU models or memory specifications) used for running experiments are provided. |
| Software Dependencies | Yes | AccGenSVM is built on top of LibSVM [Chang and Lin, 2011], using available KL divergence [Hausser and Strimmer, 2014] and FNN implementations [Beygelzimer et al., 2013]. For FNN, 'R package version 1.1, 2013.' is cited; for KL divergence, 'R package v.1.2.1.' is cited. (See the KL divergence sketch after the table.) |
| Experiment Setup | Yes | Based on a sensitivity analysis of the regularization parameter C and the γ parameter for the RBF kernel, we set C = 1 and γ = 1/f, with f the number of features. We use a KL divergence threshold of 0.3 for all datasets. For FNN, we work with 3 nearest neighbours. (See the configuration sketch after the table.) |
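
The split protocol quoted under Dataset Splits is concrete enough to restate in code. Below is a minimal Python sketch of one way to draw the reported sample sizes; the seed, the placeholder dataset size, and the disjointness of train and test indices are our assumptions, since the quoted text only specifies the percentages and that the test samples are independent.

```python
import numpy as np

rng = np.random.default_rng(0)  # seed is illustrative; the paper reports none

def disjoint_split(n, train_frac, test_frac, rng):
    """Draw disjoint random index sets at the given fractions of n examples."""
    perm = rng.permutation(n)
    n_train = int(round(train_frac * n))
    n_test = int(round(test_frac * n))
    return perm[:n_train], perm[n_train:n_train + n_test]

# Transfer levels (training) and test-set sizes as reported in the paper.
train_fracs = [0.10, 0.20, 0.30]
test_fracs = [0.20, 0.30, 0.50]

n = 1000  # placeholder dataset size
splits = [disjoint_split(n, tr, te, rng)
          for tr in train_fracs for te in test_fracs]
```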
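
The KL divergence dependency quoted under Software Dependencies is an R package; a rough Python stand-in for a plug-in KL estimate between two empirical 1-D feature distributions is sketched below. The histogram binning, the smoothing constant, and the choice of which distributions to compare are our assumptions; only the 0.3 threshold comes from the paper.

```python
import numpy as np
from scipy.stats import entropy

def kl_divergence(a, b, bins=20):
    """Plug-in KL divergence D(P_a || P_b) between two 1-D samples,
    estimated from normalised histograms over a shared binning."""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    p, _ = np.histogram(a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(b, bins=bins, range=(lo, hi))
    # Small additive smoothing so empty bins do not yield infinities.
    p = (p + 1e-9) / (p + 1e-9).sum()
    q = (q + 1e-9) / (q + 1e-9).sum()
    return entropy(p, q)

rng = np.random.default_rng(0)
src, tgt = rng.normal(0, 1, 500), rng.normal(0.5, 1, 500)  # toy samples
transfer_candidate = kl_divergence(src, tgt) < 0.3  # reported threshold
```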
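
The settings quoted under Experiment Setup map directly onto a base-learner configuration. A minimal sketch follows, using scikit-learn's SVC as a stand-in for the LibSVM build; the placeholder data and the helper name are ours, and the AccGenSVM selection procedure itself (Algorithm 1) is not reconstructed here.

```python
import numpy as np
from sklearn.svm import SVC

C = 1.0              # reported regularization parameter
KL_THRESHOLD = 0.3   # reported KL divergence threshold, same for all datasets
K_NEIGHBOURS = 3     # reported number of nearest neighbours for FNN

def base_svm(n_features):
    """RBF-kernel SVM with the reported settings: C = 1, gamma = 1/f.

    gamma = 1/n_features is LibSVM's default and what scikit-learn calls
    gamma="auto"; the original code is built directly on LibSVM.
    """
    return SVC(C=C, kernel="rbf", gamma=1.0 / n_features)

# Illustrative fit on placeholder data; the hypothesis-selection logic
# is not reproduced here.
rng = np.random.default_rng(0)
X, y = rng.random((100, 20)), rng.integers(0, 2, size=100)
clf = base_svm(X.shape[1]).fit(X, y)
```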