Multi-Class Support Vector Machine via Maximizing Multi-Class Margins

Authors: Jie Xu, Xianglong Liu, Zhouyuan Huo, Cheng Deng, Feiping Nie, Heng Huang

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In the experiment, it shows that our model can get better or compared results when comparing with other related methods. ... Experiments on benchmark datasets show that our model can get equal or better results than other related methods."
Researcher Affiliation | Academia | "1 Xidian University, Xi'an 710071, China; 2 School of Computer Science and Engineering, Beihang University, China; 3 University of Texas at Arlington, USA; 4 Northwestern Polytechnical University, China"
Pseudocode | Yes | "Algorithm 1 SVRG to solve problem (25)" (a hedged SVRG sketch appears after this table)
Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the methodology described.
Open Datasets | Yes | "Six multi-class classification datasets from UCI machine learning repository are used in our experiment [Lichman, 2013], main information are listed in Table 1."
Dataset Splits | Yes | "We use 5 times 5-fold cross validation and compute average accuracy for each method as final performance." (see the cross-validation sketch after this table)
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., GPU/CPU models, memory).
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., specific libraries, frameworks, or programming language versions).
Experiment Setup | Yes | "In all experiments, we automatically tune the parameters by selecting among the values {10^r, r ∈ {-5, ..., 5}}. We select the largest learning rate for each method and ensure that objective function value is decreasing during optimization." (a grid-search sketch follows the table)
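
The paper's Algorithm 1 applies SVRG (stochastic variance-reduced gradient) to its problem (25). Since no code is released, the block below is only a minimal, generic SVRG loop in the standard form (full gradient at a snapshot, variance-reduced inner updates), not the authors' implementation; the function names, the least-squares toy objective, and all hyperparameter values are illustrative assumptions.

```python
import numpy as np

def svrg(grad_i, n, w0, lr=0.01, epochs=10, m=None, rng=None):
    """Generic SVRG loop (a sketch, not the paper's Algorithm 1).

    grad_i(w, i) -- gradient of the i-th component function at w
    n            -- number of component functions (training samples)
    w0           -- initial parameter vector
    m            -- inner iterations per epoch (defaults to n)
    """
    rng = rng or np.random.default_rng(0)
    m = m or n
    w_snap = w0.copy()
    for _ in range(epochs):
        # Full gradient at the snapshot point.
        full_grad = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        w = w_snap.copy()
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient.
            v = grad_i(w, i) - grad_i(w_snap, i) + full_grad
            w = w - lr * v
        w_snap = w  # use the last inner iterate as the new snapshot
    return w_snap

# Toy usage on a least-squares objective (purely illustrative).
X = np.random.default_rng(1).normal(size=(100, 5))
y = X @ np.ones(5)
grad = lambda w, i: (X[i] @ w - y[i]) * X[i]
w_hat = svrg(grad, n=100, w0=np.zeros(5), lr=0.05, epochs=20)
```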
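
The evaluation protocol is 5 repetitions of 5-fold cross validation with the average accuracy reported. Below is a minimal sketch of that protocol with scikit-learn, assuming RepeatedStratifiedKFold as the splitter and a stand-in LinearSVC on the iris data in place of the paper's model and UCI datasets.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.svm import LinearSVC

# Stand-in data and model; the paper evaluates its own multi-class SVM
# on six UCI datasets, not LinearSVC on iris.
X, y = load_iris(return_X_y=True)
clf = LinearSVC(C=1.0, max_iter=10000)

# 5 repetitions of 5-fold cross validation, averaged -- the
# "5 times 5-fold cross validation" protocol described in the paper.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"mean accuracy over 25 folds: {scores.mean():.3f}")
```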
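
The reported setup tunes parameters over the log-spaced grid {10^r, r ∈ {-5, ..., 5}}. The sketch below illustrates such a grid search; attaching the grid to LinearSVC's C inside a scaled pipeline is an assumption made only to show the tuning loop, since the paper tunes its own model's parameters.

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# The 11 values 10^-5, ..., 10^5 from the experiment-setup quote.
param_grid = {"linearsvc__C": [10.0 ** r for r in range(-5, 6)]}

X, y = load_wine(return_X_y=True)
pipe = make_pipeline(StandardScaler(), LinearSVC(max_iter=10000))

# 5-fold grid search over the log-spaced values, scored by accuracy.
search = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```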