Signed Laplacian Embedding for Supervised Dimension Reduction

Authors: Chen Gong, Dacheng Tao, Jie Yang, Keren Fu

AAAI 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Thorough empirical studies on synthetic and real datasets demonstrate the effectiveness of SLE.
Researcher Affiliation | Collaboration | Chen Gong and Dacheng Tao and Jie Yang and Keren Fu. Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University; Centre for Quantum Computation and Intelligent Systems, University of Technology Sydney. {goodgongchen, jieyang, fkrsuper}@sjtu.edu.cn; dacheng.tao@uts.edu.au
Pseudocode | No | The paper describes the algorithm steps in text but does not include structured pseudocode or algorithm blocks (a hedged sketch of the overall recipe follows the table).
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | Yes | Six UCI datasets (Frank and Asuncion 2010), namely Breast Cancer, Musk, Waveform, Seeds, SPECT, and Wine, were adopted to test the dimension-reduction performance of the compared algorithms, together with Yale (Georghiades, Belhumeur, and Kriegman 2001) and Labeled Faces in the Wild (LFW) (Gary et al. 2007).
Dataset Splits | No | The paper does not explicitly provide validation dataset splits. It mentions random splits into training and test sets, but no separate validation split or explicit cross-validation methodology.
Hardware Specification | No | The paper does not provide specific hardware details used for running its experiments.
Software Dependencies | No | The paper mentions software components and algorithms (e.g., the RBF kernel, NNC classifier, QZ algorithm, and Krylov subspace algorithm) but does not provide specific version numbers for any of them (a QZ usage example follows the table).
Experiment Setup | Yes | The RBF kernel width σ in LPP, MFA, and DLA was chosen from the set {0.01, 0.1, 0.5, 1, 10}. The numbers of neighbors for graph construction, such as k in LPP and k1 and k2 in MFA and DLA, were chosen from {5, 10, 15, 20, 25, 30}. A 5-NN graph with kernel width σ = 10 was built for LPP; in MFA and DLA, k1 = 5 and k2 = 10 gave the best performance. In a separate configuration, k and σ were adjusted to 10 and 1 for LPP, and k1 = k2 = 10 for MFA and DLA (a grid-search sketch follows the table).
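
Since the paper gives the algorithm only in prose, the following is a minimal sketch of a generic signed-Laplacian supervised embedding: an RBF-weighted k-NN graph with positive weights between same-label neighbors and negative weights between different-label neighbors, a signed degree matrix built from absolute weights, and a generalized eigenproblem solved for the projection. The weighting rule, objective, and the name `signed_laplacian_embedding` are illustrative assumptions, not the authors' exact SLE formulation.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def signed_laplacian_embedding(X, y, n_components=2, k=10, sigma=1.0):
    """Hypothetical sketch of a signed-Laplacian embedding.

    X: (n_samples, n_features); y: (n_samples,) integer labels.
    The signed weighting rule and objective are assumptions, not the
    paper's exact formulation.
    """
    n = X.shape[0]
    D2 = cdist(X, X, "sqeuclidean")
    W = np.zeros((n, n))
    for i in range(n):
        # k nearest neighbors of sample i (index 0 is the point itself)
        nbrs = np.argsort(D2[i])[1:k + 1]
        for j in nbrs:
            w = np.exp(-D2[i, j] / (2 * sigma ** 2))
            # Same-label neighbors attract (+w); different-label repel (-w)
            W[i, j] = w if y[i] == y[j] else -w
    W = (W + W.T) / 2                        # symmetrize
    Dm = np.diag(np.abs(W).sum(axis=1))      # signed degree matrix
    L = Dm - W                               # signed graph Laplacian
    A = X.T @ L @ X
    B = X.T @ X + 1e-8 * np.eye(X.shape[1])  # regularized for stability
    # Generalized eigenproblem A p = lambda B p; keep smallest eigenvalues
    vals, vecs = eigh(A, B)
    P = vecs[:, :n_components]               # projection matrix
    return X @ P, P
```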
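
On the QZ algorithm noted under Software Dependencies: dense generalized eigenproblems of the form A v = λ B v are handled by LAPACK's QZ-based routines, which SciPy exposes through scipy.linalg. A toy example (random matrices, unrelated to the paper's data):

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))

# scipy.linalg.eig with a second matrix solves A v = lambda B v
# via LAPACK's QZ-based generalized eigensolvers.
vals, vecs = linalg.eig(A, B)

# The QZ (generalized Schur) decomposition itself is also available:
AA, BB, Q, Z = linalg.qz(A, B)   # A = Q @ AA @ Z.T, B = Q @ BB @ Z.T
```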
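
The grids in the Experiment Setup row translate directly into an exhaustive search. The sketch below assumes a hypothetical `evaluate` scorer (a placeholder, not from the paper) that would train a reducer with the candidate (σ, k) and report test accuracy:

```python
import itertools
import numpy as np

# Grids quoted from the paper's experiment setup
SIGMAS = [0.01, 0.1, 0.5, 1, 10]        # RBF kernel widths
NEIGHBORS = [5, 10, 15, 20, 25, 30]      # k (LPP); k1, k2 (MFA, DLA)

def evaluate(sigma, k):
    """Hypothetical scorer: a real run would fit the reducer with
    (sigma, k), classify with nearest neighbor, and return test
    accuracy. Here it returns a deterministic dummy score."""
    rng = np.random.default_rng(hash((sigma, k)) % 2**32)
    return rng.uniform()

best_params, best_score = None, -np.inf
for sigma, k in itertools.product(SIGMAS, NEIGHBORS):
    score = evaluate(sigma, k)
    if score > best_score:
        best_params, best_score = (sigma, k), score

print("best:", best_params, best_score)
```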