Sparse Reject Option Classifier Using Successive Linear Programming

Authors: Kulin Shah, Naresh Manwani

AAAI 2019, pp. 4870-4877 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show the effectiveness of the proposed approach by experimenting with it on several real-world datasets. The proposed approach not only performs comparably to the state of the art, it also successfully learns sparse classifiers.
Researcher Affiliation | Academia | Kulin Shah, Naresh Manwani: Machine Learning Lab, KCIS, IIIT Hyderabad-500032
Pseudocode | Yes | Algorithm 1: Sparse Double Ramp SVM (SDR-SVM)
Open Source Code | No | The paper mentions using the 'CVXOPT package in python language' and states 'We used the code for this approach available online (Fumera and Roli 2002b)' for a comparison method (ER-SVM). However, it neither provides access to nor makes an explicit statement about releasing the source code for its own proposed method (SDR-SVM).
Open Datasets | Yes | We report experimental results on five datasets (Ionosphere, Parkinsons, Heart, ILPD and Pima Indian Diabetes) available on the UCI machine learning repository (Lichman 2013).
Dataset Splits | Yes | Regularization parameter λ and kernel parameter γ are chosen using 10-fold cross-validation. The results provided here are based on 10 repetitions of 10-fold cross-validation (CV).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models or memory) used to run its experiments.
Software Dependencies | Yes | In the proposed approach, to solve the linear programming problem in each iteration, we have used the CVXOPT package in the Python language (Dahl and Vandenberghe 2008). (A minimal LP sketch follows the table.)
Experiment Setup | Yes | In all the experiments, we set µ = 1. Regularization parameter λ and kernel parameter γ are chosen using 10-fold cross-validation. (A cross-validation sketch follows the table.)
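For readers unfamiliar with the CVXOPT dependency noted above, the following is a minimal sketch of solving a single linear program with cvxopt.solvers.lp. The objective and constraints are illustrative placeholders only; the paper does not release the actual SDR-SVM subproblem code, so this should not be read as the authors' formulation.

```python
# Minimal sketch of one LP solve with CVXOPT, the package the paper reports
# using for its per-iteration linear programming subproblems. The specific
# objective and constraints below are illustrative, not the SDR-SVM subproblem.
from cvxopt import matrix, solvers

# minimize    c^T x
# subject to  G x <= h
c = matrix([2.0, 1.0])
G = matrix([[-1.0, 0.0, -1.0],   # each inner list is a column in CVXOPT's layout
            [0.0, -1.0, -1.0]])
h = matrix([0.0, 0.0, -1.0])     # encodes x1 >= 0, x2 >= 0, x1 + x2 >= 1

solvers.options['show_progress'] = False
sol = solvers.lp(c, G, h)
print(sol['status'], list(sol['x']))  # e.g. 'optimal' [0.0, 1.0]
```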
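The evaluation protocol described under Dataset Splits and Experiment Setup (parameters chosen by 10-fold cross-validation, results averaged over 10 repetitions of 10-fold CV) can be sketched as below. This is an assumption-laden illustration rather than the authors' code: an RBF-kernel SVC and a synthetic dataset stand in for SDR-SVM and the UCI datasets, and the C parameter plays the role of the inverse regularization parameter 1/λ.

```python
# Hedged sketch of the reported protocol: inner 10-fold CV for parameter
# selection, outer 10 repetitions of 10-fold CV for the reported scores.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the UCI datasets used in the paper.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Inner 10-fold CV chooses the regularization and kernel-width parameters.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=10)

# Outer loop: 10 repetitions of 10-fold CV, as in the reported results.
outer_cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_val_score(search, X, y, cv=outer_cv)
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```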