K-hyperplane Hinge-Minimax Classifier
Authors: Margarita Osadchy, Tamir Hazan, Daniel Keren
ICML 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We propose an algorithm for training an intersection of hyperplanes that efficiently minimizes the minimax-hinge risk. We show empirically, on two real data sets with very different characteristics, that this algorithm substantially improves over linear classifiers; further, it is on par with the classification rate of ensemble methods (comprising more than 100 simple classifiers, compared to 2-4 hyperplanes in the hinge-minimax classifier) and even approaches the classification performance of kernel SVM, but is orders of magnitude faster. To test the proposed K-hyperplane hinge-minimax classifier, we ran experiments in three different scenarios: synthetic 2D data, letter recognition, and large-scale scene classification. (A minimal sketch of the intersection-of-hyperplanes decision rule appears below the table.) |
| Researcher Affiliation | Academia | Margarita Osadchy (RITA@CS.HAIFA.AC.IL), Tamir Hazan (TAMIR@CS.HAIFA.AC.IL), Daniel Keren (DKEREN@CS.HAIFA.AC.IL), Department of Computer Science, University of Haifa, Mount Carmel, Haifa 31905, Israel |
| Pseudocode | Yes | Algorithm 1 Intersection of K hyperplanes classifier |
| Open Source Code | No | The paper mentions using 'LIBSVM' and the 'CVX optimization package' for parts of their implementation, but there is no statement or link indicating that the authors' own code for the proposed K-hyperplane hinge-minimax classifier is open source or publicly available. |
| Open Datasets | Yes | These tests were performed on a data set of letters from the UCI Machine Learning Repository (Murphy & Aha, 1994). In this test we used 397 scene categories of the SUN database, which have at least 100 images per category (Xiao et al., 2010); we downloaded the features from the SUN web page. |
| Dataset Splits | Yes | Each class was equally partitioned into training, validation, and test sets. For each letter, we used 100 samples for training, 250 for validation, and the rest for test (about 400 samples per letter). (A per-class split sketch appears below the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. It only mentions classification times in general terms. |
| Software Dependencies | Yes | The SVM classifiers were trained using C-SVC in LIBSVM. We used the CVX optimization package to find a single hyperplane in Algorithm 1. The ensemble classifiers were trained using the Matlab Statistics Toolbox. |
| Experiment Setup | No | The paper mentions 'tuned the parameters (C and γ) and the bias using the validation set' and 'The parameters of all methods have been chosen using the validation set'. However, it does not report the specific hyperparameter values or other system-level training settings in the main text, making reproduction difficult without further information. (An illustrative validation-set search is sketched below the table.) |
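
To make the classifier the paper studies more concrete, the sketch below shows the decision rule of an intersection of K hyperplanes: a sample is labeled positive only if it lies on the positive side of every hyperplane. This is a minimal illustration assuming given normals `W` and biases `b`; it is not the authors' training procedure (Algorithm 1), which learns these hyperplanes by minimizing the minimax-hinge risk.

```python
import numpy as np

def khyperplane_predict(X, W, b):
    """Label a sample +1 only if it lies on the positive side of all K hyperplanes."""
    # margins[i, k] = <w_k, x_i> + b_k for every sample / hyperplane pair
    margins = X @ W.T + b
    return np.where((margins > 0).all(axis=1), 1, -1)

# Toy usage: two hyperplanes whose intersection is the open first quadrant.
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])   # hyperplane normals (K x d)
b = np.zeros(2)              # biases (K,)
X = np.array([[0.5, 0.5],    # inside the intersection
              [-0.5, 0.5],   # violates the first hyperplane
              [0.5, -0.5]])  # violates the second hyperplane
print(khyperplane_predict(X, W, b))  # -> [ 1 -1 -1]
```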
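The per-class split reported for the letter data (100 training, 250 validation, and the remaining roughly 400 samples per letter for test) can be reproduced mechanically. The helper below is a hypothetical sketch, not code from the paper; the random seed and shuffling strategy are assumptions.

```python
import numpy as np

def split_per_class(X, y, n_train=100, n_val=250, seed=0):
    """Split every class into train / validation / test subsets
    (100 / 250 / remainder per letter, as described in the paper)."""
    rng = np.random.default_rng(seed)  # seed and shuffling are assumptions
    tr, va, te = [], [], []
    for c in np.unique(y):
        idx = rng.permutation(np.flatnonzero(y == c))
        tr.extend(idx[:n_train])
        va.extend(idx[n_train:n_train + n_val])
        te.extend(idx[n_train + n_val:])
    return (X[tr], y[tr]), (X[va], y[va]), (X[te], y[te])
```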
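The paper tunes the RBF kernel SVM parameters C and γ on the validation set but does not report the chosen values or the search grid. The sketch below illustrates one plausible validation-set search using scikit-learn's `SVC` (which wraps LIBSVM, the library the authors used); the grids are placeholders, not values from the paper.

```python
from sklearn.svm import SVC  # scikit-learn's SVC is built on LIBSVM

def tune_rbf_svm(X_tr, y_tr, X_val, y_val,
                 C_grid=(0.1, 1, 10, 100),           # placeholder grid
                 gamma_grid=(1e-3, 1e-2, 1e-1, 1)):  # placeholder grid
    """Pick (C, gamma) by classification accuracy on a held-out validation set."""
    best_C, best_gamma, best_acc = None, None, -1.0
    for C in C_grid:
        for gamma in gamma_grid:
            acc = SVC(C=C, gamma=gamma, kernel="rbf").fit(X_tr, y_tr).score(X_val, y_val)
            if acc > best_acc:
                best_C, best_gamma, best_acc = C, gamma, acc
    return best_C, best_gamma, best_acc
```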