Semi-supervised Orthogonal Graph Embedding with Recursive Projections
Authors: Hanyang Liu, Junwei Han, Feiping Nie
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments on several benchmarks demonstrate significant improvement over existing methods. |
| Researcher Affiliation | Academia | Hanyang Liu (1), Junwei Han (1), Feiping Nie (1,2); (1) Northwestern Polytechnical University, Xi'an 710072, P. R. China; (2) University of Texas at Arlington, USA |
| Pseudocode | Yes | Algorithm 1 Algorithm to solve problem in Eq.(16) |
| Open Source Code | No | The paper does not provide concrete access to source code for the described methodology. |
| Open Datasets | Yes | In our experiments, we use six real-world benchmarks including three face benchmarks (JAFFE, AT&T, and CMU-PIE), a handwritten digits dataset MNIST, and two object benchmarks (COIL-20 and MPEG-7). JAFFE: http://www.kasrl.org/jaffe.html; AT&T: http://www.cl.cam.ac.uk/research/dtg/attarchive.html; MPEG-7: http://www.dabi.temple.edu/~shape/MPEG7/dataset.html |
| Dataset Splits | No | The paper describes training and testing splits and labeled/unlabeled data, but does not explicitly mention a separate validation dataset split. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers. |
| Experiment Setup | Yes | In SOGE, we set the weight µ in the diagonal matrix U as 100 for all datasets. In order to fairly compare SOGE with other algorithms, we tuned all the regularization parameters involved in each algorithm with grid search within {10^-9, 10^-6, 10^-3, 10^0, 10^3, 10^6, 10^9}. For all the algorithms, we employ the k-nearest neighbor (k-NN) classifier to evaluate the performance of dimensionality reduction, and set k = 1 in k-NN for all the algorithms. For all the datasets, we use PCA as a preprocessing procedure to denoise all the data with 95% of the information preserved, similarly as in [Yan et al., 2007]. |
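The experiment-setup row above describes a concrete evaluation protocol: PCA preprocessing that preserves 95% of the variance, a 1-NN classifier on the reduced features, and a grid of regularization values spanning 10^-9 to 10^9. A minimal sketch of that protocol, using scikit-learn and the bundled digits dataset as a small stand-in for MNIST (the dataset choice and split here are illustrative, not the paper's exact setup), might look like:

```python
# Sketch of the paper's evaluation protocol: PCA keeping 95% of the
# variance, then a 1-NN classifier on the reduced features.
# Assumptions: scikit-learn's digits dataset stands in for MNIST, and a
# simple 70/30 split stands in for the paper's labeled/unlabeled splits.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# PCA denoising: a float n_components keeps enough components to
# preserve that fraction of the variance (95% here, as in the paper).
pca = PCA(n_components=0.95).fit(X_train)
X_train_p = pca.transform(X_train)
X_test_p = pca.transform(X_test)

# k-NN with k = 1, the classifier used for all compared algorithms.
knn = KNeighborsClassifier(n_neighbors=1).fit(X_train_p, y_train)
accuracy = knn.score(X_test_p, y_test)

# The regularization grid reported in the paper, for any method whose
# regularization parameters would be tuned by grid search.
param_grid = [10.0 ** p for p in (-9, -6, -3, 0, 3, 6, 9)]
```

Any method under comparison would be inserted between the PCA step and the 1-NN evaluation, with its regularization parameter swept over `param_grid`.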