Transfer Feature Representation via Multiple Kernel Learning

Authors: Wei Wang, Hao Wang, Chen Zhang, Fanjiang Xu

AAAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments in two real-world applications verify the effectiveness of our proposed method."
Researcher Affiliation | Academia | "1. Science and Technology on Integrated Information System Laboratory 2. State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China. weiwangpenny@gmail.com"
Pseudocode | Yes | "Algorithm 1 Transfer Feature Representation"
Open Source Code | No | No mention of open-source code for the described methodology or a link to a repository was found.
Open Datasets | Yes | "FERET (Phillips et al. 2000) and YALE (Belhumeur, Hespanha, and Kriegman 1997) are two public face data sets."
Dataset Splits | No | "Specifically, we search σd based on the validation set in the range {0.1, 1, 10}, σ in the range {0.01, 0.1, 1, 10, 100} and λ in the range {0.1, 1, 10}."
Hardware Specification | No | No specific hardware details were found.
Software Dependencies | No | No specific software dependencies with version numbers were found.
Experiment Setup | Yes | "TFR involves four parameters: σd, σ, λ and k. Specifically, we search σd based on the validation set in the range {0.1, 1, 10}, σ in the range {0.01, 0.1, 1, 10, 100} and λ in the range {0.1, 1, 10}. [...] The neighborhood size k for TFR is 3. Basis kernel functions are predetermined for TFR: linear kernel and Gaussian kernels with 10 different bandwidths, i.e., 0.5, 1, 2, 5, 7, 10, 12, 15, 17, 20."
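For readers who want to reconstruct the reported experiment setup, the sketch below re-expresses the hyperparameter grids and the 11 predetermined basis kernels in Python. It is a minimal sketch under assumptions: the paper does not release code, so the function names (build_basis_kernels, select_hyperparameters, evaluate), the exact Gaussian-kernel parameterisation, and the use of plain grid search to stand in for the authors' validation procedure are all assumptions, not the paper's implementation.

```python
import numpy as np
from itertools import product

# Hyperparameter grids searched on the validation set (from the paper's text).
SIGMA_D_GRID = [0.1, 1, 10]
SIGMA_GRID = [0.01, 0.1, 1, 10, 100]
LAMBDA_GRID = [0.1, 1, 10]
K_NEIGHBORS = 3  # neighborhood size k

# Predetermined basis kernels: one linear kernel plus Gaussian kernels
# with 10 different bandwidths.
GAUSSIAN_BANDWIDTHS = [0.5, 1, 2, 5, 7, 10, 12, 15, 17, 20]


def build_basis_kernels(X, Z=None):
    """Return the 11 basis kernel matrices K_m(X, Z).

    Assumes the Gaussian kernel exp(-||x - z||^2 / (2 * bw^2)); the paper
    lists only the bandwidth values, not the exact functional form.
    """
    Z = X if Z is None else Z
    kernels = [X @ Z.T]  # linear kernel
    # Pairwise squared Euclidean distances between rows of X and Z.
    sq_dists = (
        np.sum(X ** 2, axis=1)[:, None]
        + np.sum(Z ** 2, axis=1)[None, :]
        - 2.0 * X @ Z.T
    )
    for bw in GAUSSIAN_BANDWIDTHS:
        kernels.append(np.exp(-sq_dists / (2.0 * bw ** 2)))
    return kernels


def select_hyperparameters(evaluate):
    """Grid-search (sigma_d, sigma, lambda) by validation score.

    `evaluate` is a hypothetical placeholder for training TFR with the given
    parameters and scoring it on the validation set.
    """
    best_score, best_params = -np.inf, None
    for sigma_d, sigma, lam in product(SIGMA_D_GRID, SIGMA_GRID, LAMBDA_GRID):
        score = evaluate(sigma_d, sigma, lam, K_NEIGHBORS)
        if score > best_score:
            best_score, best_params = score, (sigma_d, sigma, lam)
    return best_params
```

The 11 basis kernels (one linear plus ten Gaussian) are the candidate kernels whose combination weights the multiple kernel learning formulation would learn; the selection loop above is ordinary grid search and only stands in for whatever validation-based search the authors actually ran.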