Bayesian Nonlinear Support Vector Machines and Discriminative Factor Modeling

Authors: Ricardo Henao, Xin Yuan, Lawrence Carin

NeurIPS 2014

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | An extensive set of experiments demonstrate the utility of using a nonlinear Bayesian SVM within discriminative feature learning and factor modeling, from the standpoints of accuracy and interpretability. |
| Researcher Affiliation | Academia | Ricardo Henao, Xin Yuan and Lawrence Carin, Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708, {r.henao,xin.yuan,lcarin}@duke.edu |
| Pseudocode | No | The paper describes algorithms and inference procedures in detail using prose and mathematical equations but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper states 'All code used in the experiments was written in Matlab...' but does not provide any explicit statement about making the code open-source or offer a link to a code repository. |
| Open Datasets | Yes | We first compare the performance of the proposed Bayesian hierarchy for nonlinear SVM (BSVM) against EP-based GP classification (GPC) and an optimization-based SVM, on six well-known benchmark datasets. (...) USPS handwritten digits dataset, consisting of 1540 gray-scale 16×16 images (...) The dataset originally introduced in [24] consists of gene expression measurements from primary breast tumor samples... |
| Dataset Splits | Yes | The parameters of the SVM {γ, θ} are obtained by grid search using an internal 5-fold cross-validation. (...) validation is done by 10-fold cross-validation. |
| Hardware Specification | Yes | All code used in the experiments was written in Matlab and executed on a 2.8GHz workstation with 4Gb RAM. |
| Software Dependencies | No | The paper states that the code was written in 'Matlab' but does not specify any version numbers for Matlab or for any other software libraries or dependencies used. |
| Experiment Setup | Yes | In all experiments we set the covariance function to (i) either the square exponential (SE)... or (ii) the automatic relevance determination (ARD) SE... (...) The parameters of the SVM {γ, θ} are obtained by grid search using an internal 5-fold cross-validation. (...) For our model we set 200 as the maximum number of iterations of the ECM algorithm and run ML-II every 20 iterations. (...) For inference, we set K = 10, a SE covariance function and run the sampler for 1200 iterations, from which we discard the first 600 and keep every 10-th for posterior summaries. |
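The experiment-setup row describes tuning the SVM parameters {γ, θ} by grid search with an internal 5-fold cross-validation. The paper's own code was written in Matlab and is not available; the following is only a minimal sketch of that tuning procedure using scikit-learn, where the RBF-kernel SVC, the synthetic toy data, and the grid values are all assumptions, not the authors' settings.

```python
# Hedged sketch (not the authors' Matlab code): grid search over SVM
# hyperparameters with an internal 5-fold cross-validation, mirroring
# the setup described in the report. Grid values and toy data are
# illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for a benchmark dataset.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# In scikit-learn's notation, C is the regularization weight and gamma
# the RBF kernel width; these play the roles of the paper's {γ, θ}.
param_grid = {"C": [0.1, 1.0, 10.0], "gamma": [0.01, 0.1, 1.0]}

search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)  # internal 5-fold CV
search.fit(X, y)

print(search.best_params_)  # parameters selected by cross-validation
```

The same pattern extends to the paper's 10-fold validation by passing `cv=10` to an outer `cross_val_score` around the inner search.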