Linear Time Solver for Primal SVM

Authors: Feiping Nie, Yizhen Huang, Xiaoqian Wang, Heng Huang

ICML 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show that our algorithm has stable performance and is on average faster than state-of-the-art solvers such as SVMperf, Pegasos and LibLinear, which integrates the TRON, PCD and DCD algorithms. ... experimental results are given in Section 4
Researcher Affiliation | Academia | Feiping Nie (FEIPINGNIE@GMAIL.COM), Yizhen Huang (HUANG.YIZHEN@GMAIL.COM), Xiaoqian Wang (XIAOQIAN.WANG93@MAVS.UTA.EDU), Heng Huang (HENG@UTA.EDU), Computer Science and Engineering Department, University of Texas at Arlington, Arlington, TX 76019
Pseudocode | Yes | Algorithm 1: Exact SVM-ALM for Lp-primal SVM; Algorithm 2: Inexact SVM-ALM for Lp-primal SVM (a generic ALM loop is sketched after the table)
Open Source Code | Yes | All results presented in the manuscript are reproducible using the code and public datasets available online at https://sites.google.com/site/svmalm.
Open Datasets | Yes | We use 7 popularly adopted benchmark datasets from various sources for performance evaluations: UCI Forest (Collobert et al., 2002) ... ijcnn1 (Chang & Lin, 2001) ... Webpage (Platt, 1999) ... UCI Connect4 (Frank & Asuncion, 2010) ... SensIT Vehicle (acoustic/seismic) (Duarte & Hu, 2004) ... Shuttle (Hsu & Lin, 2002) ... UCI Poker (Frank & Asuncion, 2010) ... Epsilon (Sonnenburg et al., 2008).
Dataset Splits | Yes | Five-fold cross validation is conducted (except in 4.3, where all samples are used for training), as in (Chang et al., 2008). A fold-splitting sketch follows the table.
Hardware Specification | Yes | All experiments are conducted on an 8-core Intel Xeon X5460 3.16GHz (12M cache, 1333 MHz FSB) Linux server with 32GB of memory.
Software Dependencies | No | The paper mentions software such as the "LibLinear software toolbox" and "MATLAB", but does not specify version numbers for any of these dependencies.
Experiment Setup | Yes | For all experiments except in 4.3, we use the default value ε = 0.01 as in LibLinear. We terminate the algorithms when the objective changes are less than 10^-4. In our method, we empirically set the maximum iteration number to 100, because in all our experiments our algorithm converges within 100 iterations. ... With the same settings as in (Chang et al., 2008) and (Hsieh et al., 2008), we compare the L1-SVM and L2-SVM solvers in terms of the training time needed to reduce the objective function obj(·) such that the relative difference to the optimum obj*, (obj - obj*)/|obj*|, is within 0.01. ... we set C = 1 for fair comparison. Both stopping criteria are sketched after the table.
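
For orientation, the block below is a minimal sketch of the augmented Lagrangian (ALM) structure that the Pseudocode row refers to, written for the L1-hinge case (p = 1). It is not the paper's Exact or Inexact SVM-ALM update rules: the function name, the closed-form dense solve for (w, b), the one-sided shrinkage step for the auxiliary variable, and the geometric penalty schedule rho are generic assumptions made here for illustration only.

    import numpy as np

    def alm_primal_svm_sketch(X, y, C=1.0, mu=1.0, rho=1.1, max_iter=100, tol=1e-4):
        # Generic ALM sketch for the L1-hinge primal SVM
        #   min_{w,b} 0.5*||w||^2 + C * sum_i max(0, 1 - y_i*(x_i.w + b)),
        # with the hinge argument split into an auxiliary variable e subject to
        # e = 1 - y*(X w + b), enforced by multipliers lam and penalty mu.
        # Illustrative only; not the paper's Exact/Inexact SVM-ALM updates.
        n, d = X.shape
        w, b = np.zeros(d), 0.0
        e, lam = np.zeros(n), np.zeros(n)
        A = np.hstack([X, np.ones((n, 1))])      # [X, 1], so theta = [w; b]
        D = np.eye(d + 1)
        D[-1, -1] = 0.0                          # do not regularize the bias
        prev_obj = np.inf
        for _ in range(max_iter):
            # (w, b)-step: ridge-like least-squares subproblem, closed form
            t = y * (1.0 - e + lam / mu)         # target values for X w + b (y_i in {-1, +1})
            theta = np.linalg.solve(D + mu * (A.T @ A), mu * (A.T @ t))
            w, b = theta[:-1], theta[-1]
            # e-step: elementwise prox of C*max(0, .) (one-sided shrinkage)
            v = 1.0 - y * (X @ w + b)
            z = v + lam / mu
            e = np.where(z > C / mu, z - C / mu, np.minimum(z, 0.0))
            # multiplier and penalty updates
            lam = lam + mu * (v - e)
            mu = min(rho * mu, 1e6)
            # stop once the primal objective stabilizes
            obj = 0.5 * w @ w + C * np.maximum(0.0, v).sum()
            if abs(prev_obj - obj) < tol * max(1.0, abs(obj)):
                break
            prev_obj = obj
        return w, b

The dense solve makes each outer iteration O(nd^2 + d^3), so this only shows the loop structure; the paper's contribution is precisely that its updates can be carried out in time linear in the number of training samples.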
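
The Dataset Splits row quotes a standard five-fold protocol. A small index-splitting helper along those lines (the function name and shuffling seed are hypothetical, not from the paper) might look like:

    import numpy as np

    def five_fold_splits(n_samples, seed=0):
        # Shuffle the sample indices once and cut them into 5 disjoint folds;
        # each fold in turn is held out while the other four form the training
        # set. (Section 4.3 of the paper instead trains on all samples.)
        rng = np.random.default_rng(seed)
        folds = np.array_split(rng.permutation(n_samples), 5)
        for k in range(5):
            test = folds[k]
            train = np.concatenate([folds[j] for j in range(5) if j != k])
            yield train, test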
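
The Experiment Setup row mixes two different stopping conditions: each solver's own termination rule (objective change below 10^-4) and the cross-solver timing criterion of (Chang et al., 2008) and (Hsieh et al., 2008) (relative gap to a precomputed optimum within 0.01). A small sketch separating the two (function names are hypothetical):

    def objective_converged(obj_prev, obj_curr, tol=1e-4):
        # Per-solver termination rule quoted above: stop once the change
        # in the objective value falls below 10^-4.
        return abs(obj_prev - obj_curr) < tol

    def within_relative_gap(obj, obj_star, eps=0.01):
        # Cross-solver timing criterion: training time is measured until
        # (obj - obj*) / |obj*| <= 0.01, with obj* a precomputed optimum.
        return (obj - obj_star) / abs(obj_star) <= eps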