Top-k Multiclass SVM
Authors: Maksim Lapin, Matthias Hein, Bernt Schiele
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on five datasets show consistent improvements in top-k accuracy compared to various baselines. Finally, extensive experiments on several challenging computer vision problems show that top-k multiclass SVM consistently improves in top-k error over the multiclass SVM (equivalent to our top-1 multiclass SVM), one-vs-all SVM and other methods based on different ranking losses [11, 16]. |
| Researcher Affiliation | Academia | 1Max Planck Institute for Informatics, Saarbrücken, Germany 2Saarland University, Saarbrücken, Germany |
| Pseudocode | Yes | Algorithm 1 Top-k Multiclass SVM |
| Open Source Code | Yes | We release our implementation of the projection procedures and both SDCA solvers as a C++ library2 with a Matlab interface. 2https://github.com/mlapin/libsdca |
| Open Datasets | Yes | We evaluate our method on five image classification datasets of different scale and complexity: Caltech 101 Silhouettes [26] (m = 101, n = 4100), MIT Indoor 67 [20] (m = 67, n = 5354), SUN 397 [29] (m = 397, n = 19850), Places 205 [30] (m = 205, n = 2448873), and ImageNet 2012 [22] (m = 1000, n = 1281167). |
| Dataset Splits | Yes | We cross-validate hyper-parameters in the range 10⁻⁵ to 10³, extending it when the optimal value is at the boundary. |
| Hardware Specification | No | No specific hardware details (like CPU/GPU models, memory, or cluster specifications) used for running experiments are explicitly stated. |
| Software Dependencies | No | The paper mentions several software tools and libraries (e.g., LibLinear, SVMPerf, Caffe), but no specific version numbers are provided for any of them. |
| Experiment Setup | No | The paper states, 'We cross-validate hyper-parameters in the range 10⁻⁵ to 10³', but does not provide the specific hyperparameter values (e.g., regularization constants, epochs) selected for the final models or other detailed training configurations. It references external tools, implying their default or documented settings were used. |
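The paper's central evaluation metric, top-k error, counts a prediction as correct whenever the true class appears among the k highest-scoring classes (top-1 error reduces to the standard multiclass error). A minimal sketch of this metric, assuming scores are given as an (n, m) NumPy array, might look like:

```python
import numpy as np

def top_k_accuracy(scores, labels, k):
    """Fraction of examples whose true class is among the k highest scores.

    scores: (n, m) array of class scores, one row per example.
    labels: (n,) array of true class indices.
    """
    # Indices of the k largest scores per row (order within the k is irrelevant).
    top_k = np.argpartition(scores, -k, axis=1)[:, -k:]
    # An example is a hit if its true label appears among those k indices.
    hits = (top_k == labels[:, None]).any(axis=1)
    return hits.mean()
```

Top-k error is then simply `1 - top_k_accuracy(scores, labels, k)`; the paper reports this for several values of k per dataset.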
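The hyper-parameter protocol quoted above (a log-spaced grid from 10⁻⁵ to 10³, extended whenever the best value lands on a boundary) can be sketched as follows; the function name and the `evaluate` callback are hypothetical, standing in for whatever validation routine a reimplementation would use:

```python
def search_log_grid(evaluate, low=-5, high=3):
    """Pick the best value from a log-spaced grid 10**low .. 10**high.

    evaluate: maps a candidate value (e.g. a regularization constant C)
    to a validation score; higher is better. If the best candidate sits
    on a grid boundary, the grid is extended in that direction, mirroring
    the paper's cross-validation protocol.
    """
    while True:
        grid = [10.0 ** e for e in range(low, high + 1)]
        best = max(grid, key=evaluate)
        if best == grid[0]:
            low -= 1          # optimum at lower boundary: extend downward
        elif best == grid[-1]:
            high += 1         # optimum at upper boundary: extend upward
        else:
            return best
```

Note this loops forever if the validation score is monotone in the parameter; a practical version would cap the number of extensions.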