SP-SVM: Large Margin Classifier for Data on Multiple Manifolds
Authors: Bin Shen, Bao-Di Liu, Qifan Wang, Yi Fang, Jan Allebach
AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | A set of experiments on real-world benchmark data sets shows that SP-SVM achieves significantly higher precision on recognition tasks than various competitive baselines, including the traditional SVM, the sparse representation based method, and the classical nearest neighbor classifier. |
| Researcher Affiliation | Academia | Bin Shen, Bao-Di Liu, Qifan Wang, Yi Fang, Jan P. Allebach. Department of Computer Science, Purdue University, West Lafayette, IN 47907, USA; College of Information and Control Engineering, China University of Petroleum, Qingdao 266580, China; Department of Computer Engineering, Santa Clara University, Santa Clara, CA 95053, USA; School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907, USA |
| Pseudocode | Yes | Algorithm 1 SP-SVM Training. Require: Data samples X = [x1, x2, ..., xn], sample labels Y = [y1, y2, ..., yn], λ, ν and k. 1: for i = 1; i ≤ n; i++ do 2: Compute Si according to Equation 4 3: end for 4: Formulate S = [S1, S2, ..., Sn] and compute T 5: Compute w by solving the quadratic programming problem in Equation 6 6: return w |
| Open Source Code | No | For SVM, the implementation of LIBSVM (Chang and Lin 2011) is adopted for its robustness and popularity. |
| Open Datasets | Yes | In the experiments, three benchmark data sets including Extended Yale B database (Georghiades, Belhumeur, and Kriegman 2001), AR database (Martinez 1998), and CMU PIE database (Sim, Baker, and Bsat 2002) are used to evaluate the performance of our proposed algorithm. |
| Dataset Splits | Yes | For each data set, the data are randomly split into a training set and a testing set. We randomly select a fixed number of images per category for training and use the rest for testing. Since for the face recognition task we usually have very limited training samples for each person in real situations, our experiments use 10 training samples per category. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. |
| Software Dependencies | No | For SVM, the implementation of LIBSVM (Chang and Lin 2011) is adopted for its robustness and popularity. For SP-SVM, S is calculated according to Equation 4, where N(xi) denotes the set of k nearest neighbors of xi. In our implementation, we use a quasi-Newton method to minimize the objective function, relying on the subgradient. |
| Experiment Setup | Yes | In our experiments, there are four parameters to set: C, ν, k and λ. C varies on the grid {2^0, 2^1, ..., 2^5} for both SVM and SP-SVM. ν varies on the grid {2^-1, 2^0, ..., 2^4}. λ is studied on the grid {10^-4, 10^-3, ..., 10^1} for the sparse representation algorithm. For k, we consider the set {1, 3, 5, 7, 9}. For the main experiments, k is set to 5, since the performance is not sensitive to k when k ≥ 3, as we will show below. |
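Lines 1–4 of Algorithm 1 build the neighbor matrix S before the quadratic program of Equation 6 is solved. A minimal numpy sketch of that neighbor-construction loop, under the assumption (Equation 4 is not reproduced in the excerpt) that each Si holds least-squares reconstruction weights over the k nearest neighbors — a common manifold-learning choice, not necessarily the paper's exact formula:

```python
import numpy as np

def compute_S(X, k=5):
    """Hedged sketch of lines 1-4 of Algorithm 1 (SP-SVM Training).

    Assumption: S_i are least-squares weights reconstructing x_i from its
    k nearest neighbors; the paper's Equation 4 may differ.
    X: (n, d) array of data samples, one row per sample.
    Returns S: (n, n) array with row i supported on N(x_i).
    """
    n = X.shape[0]
    # Pairwise Euclidean distances between all samples.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)        # a point is not its own neighbor
    S = np.zeros((n, n))
    for i in range(n):                 # "for i = 1; i <= n; i++ do"
        nbrs = np.argsort(D[i])[:k]    # N(x_i): indices of k nearest neighbors
        # Assumed stand-in for Equation 4: reconstruction weights for x_i.
        w, *_ = np.linalg.lstsq(X[nbrs].T, X[i], rcond=None)
        S[i, nbrs] = w
    return S
```

Step 5 of the algorithm would then feed S (via T) into the quadratic program of Equation 6 to obtain w; any standard QP solver could play that role, but the exact objective is not given in the excerpt, so it is omitted here.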
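The split protocol quoted above (a fixed number of randomly chosen training images per category, the rest for testing) can be sketched as follows; the function name and seed handling are illustrative, not from the paper:

```python
import numpy as np

def split_per_category(labels, n_train=10, seed=0):
    """Randomly pick n_train samples per category for training; the rest
    become the test set. Returns (train_idx, test_idx) index arrays."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)   # all samples of category c
        rng.shuffle(idx)
        train_idx.extend(idx[:n_train])     # e.g. 10 per person, as in the paper
        test_idx.extend(idx[n_train:])
    return np.array(train_idx), np.array(test_idx)
```

Repeating this with different seeds reproduces the "randomly split" protocol across trials.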
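The parameter grids in the setup row can be enumerated directly; the exponents are as recovered from the text, while `best_setting` and its `evaluate` callback are hypothetical scaffolding for the model-selection loop the paper implies but does not spell out:

```python
from itertools import product

# Grids as described in the experiment setup.
C_grid      = [2.0 ** p for p in range(0, 6)]    # {2^0, 2^1, ..., 2^5}
nu_grid     = [2.0 ** p for p in range(-1, 5)]   # {2^-1, 2^0, ..., 2^4}
lambda_grid = [10.0 ** p for p in range(-4, 2)]  # {10^-4, 10^-3, ..., 10^1}
k_grid      = [1, 3, 5, 7, 9]

def best_setting(evaluate):
    """Hypothetical grid search: evaluate(C, nu, lam, k) -> accuracy.
    Returns the highest-scoring (C, nu, lam, k) combination."""
    return max(product(C_grid, nu_grid, lambda_grid, k_grid),
               key=lambda params: evaluate(*params))
```

In the paper's main experiments k is fixed to 5, so only C, ν and λ would actually be searched there.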