Complex Moment-Based Supervised Eigenmap for Dimensionality Reduction

Authors: Akira Imakura, Momo Matsuda, Xiucai Ye, Tetsuya Sakurai

AAAI 2019, pp. 3910-3918 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical experiments indicate that the proposed method is competitive compared with the existing dimensionality reduction methods for the recognition performance.
Researcher Affiliation | Academia | Akira Imakura, Momo Matsuda, Xiucai Ye, Tetsuya Sakurai; University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573, Japan
Pseudocode | Yes | Algorithm 1: A complex moment-based supervised eigenmap for dimensionality reduction
Open Source Code | No | The paper does not provide an explicit statement or a link to open-source code for the described methodology.
Open Datasets | Yes | As test problems, we treat the binary and multiclass classification problems obtained from (Le Cun 1998; Samaria and Harter 1994) and feature selection datasets, which are available at http://featureselection.asu.edu/datasets.php. The numerical results (average ± standard error) are summarised in Table 1. For the test problem, we use the 10-class classification MNIST with 60,000 training data points (Le Cun 1998).
Dataset Splits | Yes | In these numerical experiments, k of the k-nearest neighbor, the regularization parameter of KRR, and (µ, b, L) of KCMSE are tuned by applying a line search to each parameter sequentially and by a 10-fold cross-validation until convergence. Then, the performance of each method with the tuned parameters is evaluated by a 10-fold cross-validation using a different validation set from that used for parameter tuning. (A sketch of this protocol appears after the table.)
Hardware Specification | Yes | The numerical experiments were conducted on COMA at the Center for Computational Sciences, University of Tsukuba, Japan. COMA has two Intel Xeon E5-2670v2 (2.5 GHz) processors and two Intel Xeon Phi 7110P (61 cores) processors per node. In this numerical experiment, we use only the CPU.
Software Dependencies | Yes | Numerical experiments I and II were performed using MATLAB R2017b, and numerical experiment III was performed using Fortran 90 and MPI. The sparse linear systems (9) are solved using the cluster sparse solver in Intel MKL.
Experiment Setup | Yes | For KCMSE, we use the same matrices A1 and A2 as those used for KLPP. We also use (M, N, δ) = (8, 32, 10^-15), which are the default parameters for complex moment-based eigensolvers. The input matrix V is a random matrix generated by the Mersenne Twister. We set Ω = [0, b] and place the quadrature points on an ellipse with center γ = b/2, major axis ρ = b/2, and aspect ratio α = 0.1 as follows: z_j = γ + ρ(cos(θ_j) + αi sin(θ_j)), θ_j = 2π(j − 1)/N for j = 1, 2, ..., N/2. The corresponding weights are set as ω_j = (2πρ/N)(α cos(θ_j) + i sin(θ_j)) for j = 1, 2, ..., N/2. The nonlinear function f(·) is defined as f(λ) = 1/(b − λ)². We solve the UOP problem (8) using an iterative method (Zhao, Wang, and Nie 2016). In the training phase, we use the ground truth Z as a binary matrix whose (i, j) entry is 1 if the training data x_j is in class i. This type of ground truth Z is used for several classification algorithms, including ridge regression and deep neural networks (Bishop 2006). Then, in the prediction phase, we first apply the trained dimensionality reduction and then apply k-nearest neighbors (Altman 1992) for classification to the obtained low-dimensional data. (A sketch of the quadrature setup appears after the table.)
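
The tuning protocol quoted in the Dataset Splits row is concrete enough to sketch. Below is a minimal illustration in Python with scikit-learn, assuming a generic estimator factory; `sequential_line_search`, `make_model`, and the candidate grids are hypothetical names, not the authors' code. Each parameter is line-searched in turn under 10-fold cross-validation, and the sweeps repeat until a full pass leaves every parameter unchanged.

```python
# Minimal sketch (not the authors' code) of the quoted tuning protocol:
# each hyperparameter is line-searched in turn, scored by 10-fold
# cross-validation, and the sweeps repeat until nothing changes.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def sequential_line_search(X, y, grids, make_model, max_sweeps=20):
    """grids: {name: candidate values}; make_model: params dict -> estimator."""
    params = {name: vals[0] for name, vals in grids.items()}
    for _ in range(max_sweeps):
        changed = False
        for name, vals in grids.items():
            # Line search on one parameter, holding the others fixed.
            scores = [cross_val_score(make_model({**params, name: v}),
                                      X, y, cv=10).mean() for v in vals]
            best = vals[int(np.argmax(scores))]
            changed |= best != params[name]
            params[name] = best
        if not changed:  # converged: a full sweep left every parameter unchanged
            break
    return params

# Example: tuning k of the k-nearest-neighbor classifier on data (X, y).
# tuned = sequential_line_search(X, y, {"n_neighbors": [1, 3, 5, 7, 9, 11]},
#                                lambda p: KNeighborsClassifier(**p))
```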
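
The quadrature rule in the Experiment Setup row fully determines the nodes and weights, so it can be written out directly. The sketch below computes z_j and ω_j on the ellipse and, purely as an assumption about the solver's structure, assembles complex moments in the standard Sakurai-Sugiura form S_k = Σ_j ω_j z_j^k (z_j A2 − A1)^(-1) A2 V; the `moments` helper is illustrative, and a dense NumPy solve stands in for the sparse MKL solver named in the Software Dependencies row.

```python
# Sketch of the quoted quadrature setup: N/2 nodes on an ellipse with
# center gamma = b/2, major axis rho = b/2, aspect ratio alpha = 0.1,
# plus the weights and the filter f(lambda) = 1/(b - lambda)^2.
import numpy as np

def quadrature_on_ellipse(b, N=32, alpha=0.1):
    """Nodes z_j and weights w_j for j = 1, ..., N/2 as quoted above."""
    gamma = rho = b / 2.0
    j = np.arange(1, N // 2 + 1)
    theta = 2.0 * np.pi * (j - 1) / N            # theta_j = 2*pi*(j-1)/N
    z = gamma + rho * (np.cos(theta) + alpha * 1j * np.sin(theta))
    w = (2.0 * np.pi * rho / N) * (alpha * np.cos(theta) + 1j * np.sin(theta))
    return z, w

def moments(A1, A2, V, z, w, M=8):
    """Assumed SS-style moments S_k = sum_j w_j z_j^k (z_j A2 - A1)^(-1) A2 V."""
    n, L = V.shape
    S = np.zeros((n, M * L), dtype=complex)
    B = A2 @ V
    for zj, wj in zip(z, w):
        Y = np.linalg.solve(zj * A2 - A1, B)  # dense stand-in for the sparse systems (9)
        for k in range(M):
            S[:, k * L:(k + 1) * L] += wj * zj**k * Y
    # For a real symmetric pencil the sought subspace is real, so the real
    # part is kept (an assumption mirroring common practice for SS solvers).
    return S.real

# Nonlinear weighting function from the quoted setup.
f = lambda lam, b: 1.0 / (b - lam) ** 2

# Ground truth used in the training phase (from the quote): Z[i, j] = 1 iff
# x_j is in class i, i.e. a one-hot class-indicator matrix:
#   Z = np.eye(n_classes)[labels].T
```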