Locality Preserving Projections for Grassmann Manifold

Authors: Boyue Wang, Yongli Hu, Junbin Gao, Yanfeng Sun, Haoran Chen, Muhammad Ali, Baocai Yin

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The performance of our proposed method is assessed on several classification and clustering tasks and the experimental results show its clear advantages over other Grassmann based algorithms."
Researcher Affiliation | Academia | 1) Beijing Key Laboratory of Multimedia and Intelligent Software Technology, Faculty of Information Technology, Beijing University of Technology, Beijing, China; 2) Discipline of Business Analytics, The University of Sydney Business School, University of Sydney, NSW 2006, Australia; 3) Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, China
Pseudocode | Yes | "Algorithm 1 LPP for Grassmann manifold."
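The paper's Algorithm 1 itself is not reproduced in this report. The general idea, however, can be sketched: classical Locality Preserving Projections applied to Grassmann points via the projection embedding X ↦ vec(XXᵀ). This is a minimal illustration under assumptions — the embedding, the heat-kernel weights, and the kNN graph construction are standard LPP choices, not necessarily the authors' exact formulation:

```python
import numpy as np
from scipy.linalg import eigh


def grassmann_embed(bases):
    """Map each orthonormal basis X (D x d) to the vectorized projector vec(X X^T).

    This embedding is an assumption: it turns Grassmann points into Euclidean
    vectors so that classical LPP applies; it is not claimed to be Algorithm 1.
    """
    return np.array([(X @ X.T).ravel() for X in bases])


def lpp(V, k=5, t=1.0, out_dim=2):
    """Classical LPP on row-vectors V: kNN graph, heat-kernel weights,
    then the generalized eigenproblem V^T L V a = lam V^T D V a."""
    n, m = V.shape
    # Pairwise squared Euclidean distances.
    sq = np.sum(V ** 2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * (V @ V.T)
    np.fill_diagonal(D2, np.inf)          # exclude self-neighbours
    # kNN affinity graph with heat-kernel weights, symmetrized.
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D2[i])[:k]
        W[i, nbrs] = np.exp(-D2[i, nbrs] / t)
    W = np.maximum(W, W.T)
    Dm = np.diag(W.sum(axis=1))           # degree matrix
    L = Dm - W                            # graph Laplacian
    A = V.T @ L @ V
    B = V.T @ Dm @ V + 1e-8 * np.eye(m)   # small ridge keeps B positive definite
    # Smallest generalized eigenvectors give the projection directions.
    _, vecs = eigh(A, B)
    return vecs[:, :out_dim]
```

New points are projected with `grassmann_embed(new_bases) @ lpp(...)`, mirroring how LPP maps unseen samples through the learned linear projection.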
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | "Extended Yale B dataset is captured from 38 subjects..." (http://vision.ucsd.edu/content/yale-face-database); "Highway Traffic dataset contains 253 video sequences..." (http://www.svcl.ucsd.edu/projects/traffic/); "UCF sport dataset includes a total of 150 sequences..." (http://crcv.ucf.edu/data/)
Dataset Splits | Yes | "Table 2: Classification results (in %) on different datasets. We also list the number of samples in the first two columns." The quoted table gives per-dataset counts under "Training" and "Testing" columns (e.g., Extended Yale B, 38 subjects: 221 training, 76 testing, ...).
Hardware Specification | Yes | "All the algorithms are coded in Matlab 2014a and implemented on an Intel Core i7-4600M 2.9GHz CPU machine with 8G RAM."
Software Dependencies | Yes | "All the algorithms are coded in Matlab 2014a and implemented on an Intel Core i7-4600M 2.9GHz CPU machine with 8G RAM."
Experiment Setup | Yes | "For simplification and fairness, here we set r = 0.95 in all our experiments. The performance of different algorithms is evaluated by Accuracy (ACC) and we also add Normalized Mutual Information (NMI) as an additional evaluation method for clustering algorithms. ... We set K = 5 for GKNN algorithm in all three experiments, and the number of training and testing samples are listed in the first two columns in Table 2, while other parameters can be found in Table 1 (i.e., D, d and r)."
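The evaluation protocol described above (a GKNN classifier with K = 5, accuracy via ACC, and NMI for clustering) can be sketched end to end. Note the assumptions: the projection-metric distance for GKNN, Hungarian-matching ACC, and geometric-mean NMI normalization are common conventions, not definitions taken from the paper:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def proj_dist2(X, Y):
    """Squared projection-metric distance between orthonormal bases:
    d - ||X^T Y||_F^2 (an assumed, standard Grassmann distance)."""
    return X.shape[1] - np.linalg.norm(X.T @ Y, 'fro') ** 2


def gknn_predict(train, labels, test, K=5):
    """K-nearest-neighbour majority vote on the Grassmann manifold."""
    labels = np.asarray(labels)
    preds = []
    for Y in test:
        d = np.array([proj_dist2(X, Y) for X in train])
        vote = labels[np.argsort(d)[:K]]
        vals, counts = np.unique(vote, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)


def clustering_accuracy(y_true, y_pred):
    """ACC: align predicted cluster ids to true labels one-to-one via the
    Hungarian algorithm, then compute plain accuracy."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    lt, lp = np.unique(y_true), np.unique(y_pred)
    cost = np.zeros((lp.size, lt.size))
    for i, p in enumerate(lp):
        for j, t in enumerate(lt):
            cost[i, j] = -np.sum((y_pred == p) & (y_true == t))
    r, c = linear_sum_assignment(cost)
    return -cost[r, c].sum() / y_true.size


def nmi(a, b):
    """NMI with geometric-mean normalization (normalization conventions vary)."""
    a, b = np.asarray(a), np.asarray(b)
    n = a.size
    _, ai = np.unique(a, return_inverse=True)
    _, bi = np.unique(b, return_inverse=True)
    C = np.zeros((ai.max() + 1, bi.max() + 1))
    for i, j in zip(ai, bi):
        C[i, j] += 1.0
    P = C / n                         # joint distribution of the two labelings
    pa, pb = P.sum(1), P.sum(0)       # marginals
    nz = P > 0
    mi = np.sum(P[nz] * np.log(P[nz] / (pa[:, None] * pb[None, :])[nz]))
    ha = -np.sum(pa * np.log(pa))
    hb = -np.sum(pb * np.log(pb))
    return mi / np.sqrt(ha * hb)
```

A perfect clustering under any label permutation scores 1.0 on both ACC and NMI, which is why these metrics (rather than raw label agreement) are the standard choice for clustering evaluation.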