Dropout Training for Support Vector Machines

Authors: Ning Chen, Jun Zhu, Jianfei Chen, Bo Zhang

Venue: AAAI 2014

Each reproducibility variable below is listed with its result, followed by the LLM's supporting response.
Research Type: Experimental
"Empirical results on several real datasets demonstrate the effectiveness of dropout training on significantly boosting the classification accuracy of linear SVMs."
Researcher Affiliation: Academia
Ning Chen, Jun Zhu, Jianfei Chen, Bo Zhang. State Key Lab of Intelligent Tech. & Systems; Tsinghua National TNList Lab; Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China. {ningchen@mail, dcszj@mail, chenjf10@mails, dcszb@mail}.tsinghua.edu.cn
Pseudocode: No
The paper describes its iteratively re-weighted least squares (IRLS) algorithm verbally but does not present it in a structured pseudocode block or algorithm box.
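As a rough illustration of what such an algorithm box could look like, here is a minimal IRLS sketch for a plain linear SVM based on the data-augmentation view of the hinge loss (Polson and Scott 2011), which this line of work builds on. The update rules and all names below are our own assumptions, not the paper's Dropout-SVM updates, which additionally take expectations under the dropout noise.

```python
import numpy as np

def irls_svm(X, y, reg=1.0, n_iter=50, eps=1e-8):
    """Hypothetical IRLS loop for a linear SVM (hinge loss, L2 penalty),
    following the Polson & Scott (2011) data augmentation; y must be +/-1.
    Illustration only, not the paper's Dropout-SVM algorithm."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        margin = y * (X @ w)
        lam = np.abs(1.0 - margin) + eps      # E-step: latent scales
        weights = 1.0 / lam                   # per-example IRLS weights
        targets = y * (1.0 + lam)             # working responses
        # M-step: solve the weighted ridge-regression normal equations
        WX = X * weights[:, None]
        w = np.linalg.solve(X.T @ WX + reg * np.eye(d), WX.T @ targets)
    return w
```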
Open Source Code: No
The paper states, "We implement both Dropout-SVM and Dropout-Logistic using C++," but it does not provide any link or explicit statement about the availability of the source code for the described methodology.
Open Datasets: Yes
"We use the public Amazon book review and kitchen review datasets (Blitzer, Dredze, and Pereira 2007)... We choose the CIFAR-10 image categorization dataset (http://www.cs.toronto.edu/~kriz/cifar.html)... We choose the MNIST dataset, which consists of 60,000 training and 10,000 testing handwritten digit images from 10 categories."
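For reproduction purposes, these datasets are also reachable through common loaders. One illustrative route for MNIST (not the paper's own pipeline; the loader and dataset name are standard scikit-learn/OpenML conventions, not something the paper specifies):

```python
from sklearn.datasets import fetch_openml

# Fetch the 70,000 MNIST images from OpenML; the conventional split is
# the first 60,000 for training and the last 10,000 for testing.
mnist = fetch_openml('mnist_784', version=1, as_frame=False)
X, y = mnist.data, mnist.target
X_train, y_train = X[:60000], y[:60000]
X_test, y_test = X[60000:], y[60000:]
```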
Dataset Splits: Yes
"The hyper-parameters are selected via cross-validation on the training set. ... During training, we choose the best models over different dropout levels via cross-validation."
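A minimal, purely illustrative rendering of that protocol in scikit-learn, tuning the SVM hyper-parameter by cross-validation on the training split only; the synthetic data and the grid are our assumptions, since the paper's C++ code is not available:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

# Synthetic stand-in for a training split (the paper tunes on the
# training set only and evaluates on the held-out test set).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 50))
y_train = rng.integers(0, 2, size=200) * 2 - 1

# Choose the regularization constant by 5-fold cross-validation.
grid = [0.01, 0.1, 1.0, 10.0]
best_C = max(grid, key=lambda C: cross_val_score(
    LinearSVC(C=C), X_train, y_train, cv=5).mean())
```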
Hardware Specification: No
The paper states, "We implement both Dropout-SVM and Dropout-Logistic using C++..." but does not specify any particular hardware components such as GPU or CPU models, memory, or the computing environment used for the experiments.
Software Dependencies: No
The paper mentions implementation "using C++" but does not provide specific version numbers for the programming language or for any other ancillary software dependencies such as libraries or frameworks.
Experiment Setup: Yes
"We consider the unbiased dropout (or blankout) noise model, that is, p(x̃ = 0) = q and p(x̃ = x/(1 − q)) = 1 − q, where q ∈ [0, 1) is a pre-specified corruption level. ... for each value of M we choose the dropout model with q selected by cross-validation. The hyper-parameter of the SVM classifier is also chosen via cross-validation on the training data. ... The hyper-parameters are selected via cross-validation on the training set."
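That noise model translates directly into code. A minimal sketch (the function name and the sampling-based usage are ours; the paper marginalizes over this noise in expectation rather than sampling corrupted copies):

```python
import numpy as np

def blankout(x, q, rng=None):
    """Unbiased blankout noise: each feature is zeroed with probability q
    and rescaled to x / (1 - q) otherwise, so that E[x_tilde] = x."""
    rng = np.random.default_rng() if rng is None else rng
    keep = rng.random(x.shape) >= q
    return np.where(keep, x / (1.0 - q), 0.0)

x = np.array([1.0, 2.0, 3.0, 4.0])
x_tilde = blankout(x, q=0.5)  # each entry is either 0 or doubled
```

Unbiasedness follows from (1 − q) · x/(1 − q) + q · 0 = x.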