Flexible Orthogonal Neighborhood Preserving Embedding

Authors: Tianji Pang, Feiping Nie, Junwei Han

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experiments on several benchmark databases demonstrate the effectiveness of our algorithm.
Researcher Affiliation | Academia | Tianji Pang (1), Feiping Nie (1,2), Junwei Han (1); (1) Northwestern Polytechnical University, Xi'an 710072, P. R. China; (2) University of Texas at Arlington, USA
Pseudocode | Yes | Algorithm 1: The Algorithm of Solving (8). Input: training data X ∈ R^(m×n), parameter β, the reduced dimension d, the number of nearest neighbors k. Initialize P = I and F = X. While not converged: 1. Update W by solving (10); 2. Update P: the columns of the updated P are the first d eigenvectors of Q corresponding to the d smallest eigenvalues, where Q can be calculated by (14); 3. Update F by (11). Output: P ∈ R^(m×d). (A hedged NumPy sketch of this alternating scheme is given after the table.)
Open Source Code | No | The paper does not provide concrete access to source code for the methodology it describes, nor does it explicitly state that code is available.
Open Datasets | Yes | In this paper, we employ two well-known synthetic data sets from [Roweis and Saul, 2000]: the s-curve and the swissroll. ... The Yale B data set [Georghiades et al., 2001] ... The PIE data set [Sim et al., 2002] ... The ORL data set [Samaria and Harter, 1994] ... The UMIST data set [Graham and Allinson, 1995]
Dataset Splits | No | 50 percent of the images of each individual are randomly selected with labels to form the training set, and the rest of the data set is used as the testing set. The paper specifies a training and testing split but does not mention a separate validation set.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions using an "SVM classifier" and a "linear kernel", but it does not provide specific software dependencies or version numbers (e.g., Python 3.8, PyTorch 1.9, CPLEX 12.4).
Experiment Setup | Yes | The affinity graphs of LPP, NPE and FONPE are all constructed using k = 6 nearest neighbor points. ... In our experiments, we keep 95 percent information in the sense of reconstruction error. ... The training samples are used to learn a projection. The testing samples are then projected into the reduced space. Recognition is performed using an SVM classifier. We utilize the linear kernel with the parameter C = 1. ... we run LPP, NPE and FONPE when k = 3, 5, 7, 9, 11, 13, respectively. ... The parameter [β] is selected from 10^-7, 10^-6, ..., 10^7. (An illustrative sketch of this evaluation pipeline also follows the table.)
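
Because the report quotes only the skeleton of Algorithm 1 and Eqs. (8), (10), (11) and (14) are not reproduced here, the following NumPy sketch is a hypothetical reconstruction rather than the authors' code. It assumes the objective min ||F(I − W)||_F^2 + β||P^T X − F||_F^2 subject to P^T P = I, with the reconstruction weights W computed as standard LLE/NPE weights on the current embedding F; the helper names lle_weights and fonpe_sketch are illustrative.

```python
import numpy as np

def lle_weights(F, k):
    # Standard LLE/NPE reconstruction weights with k-NN support and a
    # sum-to-one constraint; an assumed stand-in for step 1, since Eq. (10)
    # is not quoted in the report. F: (d, n) matrix with samples as columns.
    d, n = F.shape
    W = np.zeros((n, n))
    dist2 = ((F[:, :, None] - F[:, None, :]) ** 2).sum(axis=0)    # (n, n) squared distances
    for i in range(n):
        order = np.argsort(dist2[i])
        nbrs = order[order != i][:k]                     # k nearest neighbors of sample i
        Z = F[:, nbrs] - F[:, [i]]                       # neighbors centered at sample i
        G = Z.T @ Z                                      # local (k, k) Gram matrix
        G += 1e-6 * (np.trace(G) + 1e-12) * np.eye(k)    # regularize for numerical stability
        w = np.linalg.solve(G, np.ones(k))
        W[nbrs, i] = w / w.sum()                         # weights sum to one
    return W

def fonpe_sketch(X, d, k=6, beta=1.0, n_iter=30):
    # Hypothetical alternating scheme for Algorithm 1, under the assumed
    # objective min ||F(I - W)||_F^2 + beta * ||P^T X - F||_F^2, P^T P = I,
    # with X of shape (m, n), F of shape (d, n), P of shape (m, d).
    m, n = X.shape
    P = np.eye(m)[:, :d]          # adapted initialization (the report says P = I, F = X)
    F = P.T @ X
    I_n = np.eye(n)
    for _ in range(n_iter):
        W = lle_weights(F, k)                  # step 1: update reconstruction weights on F
        M = (I_n - W) @ (I_n - W).T
        A = np.linalg.inv(M + beta * I_n)
        # step 2: with F eliminated, minimize tr(P^T Q P) over P^T P = I,
        # i.e. take the d eigenvectors of Q with the smallest eigenvalues
        Q = X @ (I_n - beta * A) @ X.T
        _, vecs = np.linalg.eigh(Q)            # eigenvalues in ascending order
        P = vecs[:, :d]
        F = beta * (P.T @ X) @ A               # step 3: closed-form update of F
    return P
```

Under that assumed objective, eliminating F leaves tr(P^T Q P) with Q = X(I − β(M + βI)^(-1))X^T, which is consistent with the quoted description of P as the d eigenvectors of Q with the smallest eigenvalues; the true Eqs. (10), (11) and (14) may differ.

The Experiment Setup row can likewise be turned into a small evaluation sketch. The one below is illustrative, assuming scikit-learn, a 50/50 stratified split as described in the Dataset Splits row, PCA retaining 95% of the variance as a proxy for "keeping 95 percent information in the sense of reconstruction error", and a linear SVM with C = 1; evaluate_fonpe is a hypothetical name, and loading the image data is left to the caller.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def evaluate_fonpe(X, y, d, k=6, beta=1.0, seed=0):
    # X: (n_samples, n_features) array of vectorized images, y: subject labels.
    # 50/50 per-class random split into training and testing sets.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=seed)

    pca = PCA(n_components=0.95).fit(X_tr)            # keep ~95% of the variance
    Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)

    # learn the projection on the training data only (samples as columns),
    # using the fonpe_sketch defined above; d must not exceed the PCA dimension
    P = fonpe_sketch(Z_tr.T, d=d, k=k, beta=beta)
    E_tr, E_te = Z_tr @ P, Z_te @ P                   # project into the reduced space

    clf = SVC(kernel='linear', C=1).fit(E_tr, y_tr)   # linear-kernel SVM with C = 1
    return clf.score(E_te, y_te)                      # recognition accuracy
```

The report states that β is selected from 10^-7, 10^-6, ..., 10^7; reproducing that would amount to looping this function over the grid and keeping the best-performing value.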