On Robustness and Regularization of Structural Support Vector Machines

Authors: Mohamad Ali Torkamani, Daniel Lowd

ICML 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental results show that our method outperforms the non-robust structural SVMs on real-world data when the test data distribution has drifted from the training data distribution.
Researcher Affiliation | Academia | Mohamad Ali Torkamani (ALI@CS.UOREGON.EDU) and Daniel Lowd (LOWD@CS.UOREGON.EDU), Computer and Information Science Department, University of Oregon
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | The expanded political blogs dataset and our robust SVM implementation can be downloaded from the following URL: http://ix.cs.uoregon.edu/~lowd/robustsvmstruct
Open Datasets | Yes | We introduce a new dataset based on the political blogs dataset collected by Adamic and Glance (2005).
Dataset Splits | Yes | We partitioned the blogs into three separate sub-networks and used three-way cross-validation, training on one sub-network, using the next as a validation set for tuning parameters, and evaluating on the third. (This rotation is sketched in a code example after the table.)
Hardware Specification | No | The paper mentions using the Gurobi optimization engine but does not specify any hardware details such as CPU, GPU, or memory.
Software Dependencies | Yes | We learned parameters using a cutting plane method, implemented using the Gurobi optimization engine 5.60 (2014) for running all integer and quadratic programs. (A toy Gurobi call is sketched after the table.)
Experiment Setup | Yes | Standard structural SVMs have one parameter C that needs to be tuned. The robust method has an additional regularization parameter C = 1/B_e = 1/B_w, which scales the strength of the robust regularization. We chose these parameters from the semi-logarithmic set {0, .001, .002, .005, .1, ..., 10, 20, 50}. We learned parameters using a cutting plane method, implemented using the Gurobi optimization engine 5.60 (2014) for running all integer and quadratic programs. We ran for 50 iterations and selected the weights from the iteration with the best performance on the tuning set. (The tuning loop is sketched after the table.)
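
To make the three-way cross-validation protocol from the Dataset Splits row concrete, here is a minimal Python sketch of the rotation. The names subnetworks, train_model, and evaluate are hypothetical placeholders, not part of the authors' released code.

# Each of the three blog sub-networks takes a turn as the training,
# tuning (validation), and test fold, as described in the Dataset Splits row.
def three_way_rotation(subnetworks, train_model, evaluate):
    assert len(subnetworks) == 3
    scores = []
    for i in range(3):
        train = subnetworks[i]
        tune = subnetworks[(i + 1) % 3]   # validation fold for tuning parameters
        test = subnetworks[(i + 2) % 3]
        model = train_model(train, tune)  # hyperparameters chosen on the tuning fold
        scores.append(evaluate(model, test))
    return sum(scores) / len(scores)    # average test performance over the 3 rotations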
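
The Software Dependencies row only states that Gurobi 5.60 solved the integer and quadratic programs. The snippet below is a toy quadratic program in Gurobi's Python API (gurobipy), included solely to illustrate what such a solver call looks like; it is an assumption about usage, not the authors' actual learning or separation-oracle programs.

import gurobipy as gp
from gurobipy import GRB

# Toy QP: minimize w1^2 + w2^2 - 2*w1 subject to w1 + w2 >= 1.
# Purely illustrative; the paper's QPs arise inside the cutting-plane method.
m = gp.Model("toy_qp")
w1 = m.addVar(lb=-GRB.INFINITY, name="w1")
w2 = m.addVar(lb=-GRB.INFINITY, name="w2")
m.setObjective(w1 * w1 + w2 * w2 - 2 * w1, GRB.MINIMIZE)
m.addConstr(w1 + w2 >= 1, name="example_constraint")
m.optimize()
print(w1.X, w2.X)  # optimal weights of the toy problem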
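
For the Experiment Setup row, here is a minimal sketch of the outer tuning loop: a grid over the two regularization parameters (tuned jointly here, which is an assumption) combined with keeping the cutting-plane iterate that scores best on the tuning sub-network. train_cutting_plane and tuning_score are hypothetical callables, and the grid below lists only the values quoted explicitly; the values elided by "..." in the paper are omitted.

PARAM_GRID = [0, .001, .002, .005, .1, 10, 20, 50]  # abridged semi-logarithmic grid
MAX_ITERS = 50

def select_weights(train_data, tune_data, train_cutting_plane, tuning_score):
    # train_cutting_plane is assumed to yield the weight vector after each
    # cutting-plane iteration; tuning_score evaluates weights on the tuning set.
    best_score, best_w = float("-inf"), None
    for C in PARAM_GRID:                  # standard structural SVM parameter
        for C_robust in PARAM_GRID:       # robust regularization parameter
            for w in train_cutting_plane(train_data, C, C_robust, MAX_ITERS):
                score = tuning_score(w, tune_data)
                if score > best_score:    # keep the best-performing iterate
                    best_score, best_w = score, w
    return best_w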