A Minimax Approach to Supervised Learning

Authors: Farzan Farnia, David Tse

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We perform several numerical experiments to show the power of the minimax SVM in outperforming the SVM.
Researcher Affiliation | Academia | Farzan Farnia (farnia@stanford.edu), David Tse (dntse@stanford.edu), Department of Electrical Engineering, Stanford University, Stanford, CA 94305.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | We evaluated the performance of the minimax SVM on six binary classification datasets from the UCI repository.
Dataset Splits | No | We determined the parameters by cross validation, where we used a randomly-selected 70% of the training set for training and the rest 30% for testing. ... We performed this procedure in 1000 Monte Carlo runs each training on 70% of the data points and testing on the rest 30% and averaged the results. (A sketch of this 70/30 Monte Carlo protocol appears after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper mentions "Matlab's SVM command" but does not provide specific version numbers for Matlab or any other software dependencies.
Experiment Setup | Yes | We implemented the minimax SVM by applying the subgradient descent to (18) with the regularizer λ‖α‖₂². We determined the parameters by cross validation, where we used a randomly-selected 70% of the training set for training and the rest 30% for testing. We tested the values in {2^-10, ..., 2^10}. (A sketch of this training and selection loop appears after the table.)
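
The 70/30 Monte Carlo evaluation quoted in the Dataset Splits row can be written as a short loop. The sketch below is not the authors' code: scikit-learn's LinearSVC stands in for the minimax SVM, and the function name monte_carlo_accuracy is only illustrative.

```python
# Minimal sketch of the evaluation protocol quoted above: 1000 Monte Carlo runs,
# each training on a random 70% of the points and testing on the remaining 30%,
# with the per-run accuracies averaged at the end.
# LinearSVC is a stand-in; the paper's minimax SVM would replace it.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

def monte_carlo_accuracy(X, y, n_runs=1000, test_size=0.3, seed=0):
    rng = np.random.RandomState(seed)
    scores = []
    for _ in range(n_runs):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, random_state=rng.randint(2**31 - 1))
        clf = LinearSVC().fit(X_tr, y_tr)       # train on the 70% split
        scores.append(clf.score(X_te, y_te))    # test on the held-out 30%
    return float(np.mean(scores))               # average over all runs
```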
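The Experiment Setup row describes subgradient descent with a λ‖α‖₂² regularizer and λ chosen by cross validation over {2^-10, ..., 2^10} on a 70/30 split. The sketch below only illustrates that training-and-selection structure under an assumed hinge-loss objective; it is not the paper's minimax objective (18), and subgradient_train / select_lambda are hypothetical helper names.

```python
# Hedged sketch: subgradient descent on a regularized objective
# (hinge loss + lam * ||alpha||_2^2, used here as a stand-in, NOT objective (18)),
# with lambda selected from {2^-10, ..., 2^10} on a random 70/30 split.
import numpy as np
from sklearn.model_selection import train_test_split

def subgradient_train(X, y, lam, n_iters=500, lr=0.01):
    """Subgradient descent on hinge loss + lam * ||alpha||_2^2 (labels y in {-1, +1})."""
    alpha = np.zeros(X.shape[1])                # no intercept, for brevity
    for t in range(1, n_iters + 1):
        margins = y * (X @ alpha)
        # Subgradient of the average hinge loss plus gradient of the ridge penalty.
        g = -(X * y[:, None])[margins < 1].sum(axis=0) / len(y) + 2 * lam * alpha
        alpha -= (lr / np.sqrt(t)) * g          # diminishing step size
    return alpha

def select_lambda(X, y, seed=0):
    """Pick lambda from {2^-10, ..., 2^10} via a single random 70/30 validation split."""
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=seed)
    best_lam, best_acc = None, -1.0
    for lam in [2.0 ** k for k in range(-10, 11)]:
        alpha = subgradient_train(X_tr, y_tr, lam)
        acc = np.mean(np.sign(X_va @ alpha) == y_va)
        if acc > best_acc:
            best_lam, best_acc = lam, acc
    return best_lam
```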