Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Soft Margin Support Vector Classification as Buffered Probability Minimization
Authors: Matthew Norton, Alexander Mafusalov, Stan Uryasev
JMLR 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | The main contribution of this paper, though, is not a new SVM formulation with computational or generalization benefits. The main contribution of this paper is proof that soft margin support vector classification is equivalent to simple bPOE minimization. Additionally, we show that the C-SVM, EC-SVM, ν-SVM, and Eν-SVM fit nicely into the general framework of superquantile and bPOE minimization problems. This allows us to gain interesting and surprising insights, interpreting soft margin support vector optimization with newly developed statistical tools. |
| Researcher Affiliation | Academia | Matthew Norton, Alexander Mafusalov, Stan Uryasev; Risk Management and Financial Engineering Lab, Department of Industrial and Systems Engineering, University of Florida, Gainesville, FL 32611, USA. |
| Pseudocode | Yes | Initialize: Minimize (30) w.r.t. (w, b), fixing δ_i = 0 for all i = 1, ..., N. Denote the optimal point as (w*, b*). Step 1: Let δ̂_i = argmax_{δ_i ∈ C} wᵀδ_i for all i = 1, ..., N. Step 2: Minimize (30) w.r.t. (w, b), fixing δ_i = δ̂_i for all i = 1, ..., N. Denote the optimal point as (w*, b*). Step 3: If the iteration limit is reached or the convergence criterion is satisfied, STOP. Else, return to Step 1. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing code, nor does it provide links to a code repository or mention code in supplementary materials. |
| Open Datasets | No | The paper is theoretical and focuses on mathematical equivalences and proofs. It discusses the general concept of datasets in the context of SVMs but does not describe any experiments performed on specific datasets or provide access information for any dataset. |
| Dataset Splits | No | The paper is theoretical and does not describe experiments on specific datasets; therefore, no dataset splits are mentioned. |
| Hardware Specification | No | The paper is theoretical and does not describe any computational experiments; thus, no hardware specifications are provided. |
| Software Dependencies | No | The paper is theoretical and does not describe any computational experiments; thus, no specific software dependencies with version numbers are mentioned. It mentions 'convex optimization software' in general terms without versions. |
| Experiment Setup | No | The paper is theoretical and focuses on mathematical formulations, equivalences, and generalization bounds rather than empirical evaluation. Therefore, there are no specific details regarding experimental setup, hyperparameters, or training configurations. |
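The alternating procedure quoted in the Pseudocode row can be sketched in code. This is a minimal illustration, not the authors' implementation: it assumes the perturbation set C is a label-aware ∞-norm box of radius `eps` (so the inner argmax has the closed form ε·sign(w)), and it stands in for "Minimize (30)" with a plain subgradient descent on a regularized hinge loss. All function names and parameters below are hypothetical choices for the sketch.

```python
import numpy as np

def hinge_svm_subgrad(X, y, lam=0.1, lr=0.01, iters=500):
    """Stand-in for 'Minimize (30) w.r.t. (w, b)': subgradient descent
    on lam*||w||^2 + mean hinge loss over the (possibly perturbed) data."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(iters):
        margins = y * (X @ w + b)
        active = margins < 1                      # points with nonzero hinge subgradient
        gw = 2 * lam * w - (y[active, None] * X[active]).sum(axis=0) / n
        gb = -y[active].sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b

def robust_svm_alternating(X, y, eps=0.05, rounds=5, **fit_kw):
    """Alternate between fitting (w, b) with deltas fixed and picking
    worst-case deltas, following the quoted pseudocode's loop."""
    # Initialize: delta_i = 0 for all i, fit once
    w, b = hinge_svm_subgrad(X, y, **fit_kw)
    for _ in range(rounds):
        # Step 1 (assumed box set C): worst-case delta_i = -eps * y_i * sign(w),
        # i.e. the perturbation that most reduces each point's margin
        delta = -eps * y[:, None] * np.sign(w)[None, :]
        # Step 2: re-minimize with delta_i fixed at these worst-case values
        w, b = hinge_svm_subgrad(X + delta, y, **fit_kw)
        # Step 3: a fixed round count serves as the stopping criterion here
    return w, b
```

On well-separated synthetic data this loop recovers a separating hyperplane; the closed-form inner step is what makes the alternation cheap, since only the outer (w, b) fit requires iterative optimization.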