Adversarial Multiclass Classification: A Risk Minimization Perspective
Authors: Rizal Fathony, Anqi Liu, Kaiser Asif, Brian Ziebart
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the performance of the AL0-1 classifier and compare with the three most popular multiclass SVM formulations: WW [11], CS [10], and LLW [12]. We use 12 datasets from the UCI Machine Learning repository [30] with various sizes and numbers of classes (details in Table 1). For each dataset, we consider the methods using the original feature space (linear kernel) and a kernelized feature space using the Gaussian radial basis function kernel. We report the accuracy of each method averaged over the 20 dataset splits for both linear feature representations and Gaussian kernel feature representations in Table 2. |
| Researcher Affiliation | Academia | Rizal Fathony, Anqi Liu, Kaiser Asif, Brian D. Ziebart; Department of Computer Science, University of Illinois at Chicago, Chicago, IL 60607; {rfatho2, aliu33, kasif2, bziebart}@uic.edu |
| Pseudocode | Yes | Algorithm 1 Constraint generation method |
| Open Source Code | No | The paper does not provide any statement about making its source code openly available or providing a link to a code repository. |
| Open Datasets | Yes | We use 12 datasets from the UCI Machine Learning repository [30] with various sizes and numbers of classes (details in Table 1). For each dataset, we consider the methods using the original feature space (linear kernel) and a kernelized feature space using the Gaussian radial basis function kernel. |
| Dataset Splits | Yes | We then perform two-stage, five-fold cross validation on the training set of the first split to tune each model's parameter C and the kernel parameter γ under the kernelized formulation. In the first stage, the values for C are 2^i, i = {0, 3, 6, 9, 12} and the values for γ are 2^i, i = {−12, −9, −6, −3, 0}. We select final values for C from 2^i C0, i = {−2, −1, 0, 1, 2} and values for γ from 2^i γ0, i = {−2, −1, 0, 1, 2} in the second stage, where C0 and γ0 are the best parameters obtained in the first stage. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running experiments. |
| Software Dependencies | No | The paper names the Shark machine learning library [31] as the implementation used for the three multiclass SVM formulations, but it does not specify library versions or a full list of software dependencies. |
| Experiment Setup | Yes | We then perform two-stage, five-fold cross validation on the training set of the first split to tune each model's parameter C and the kernel parameter γ under the kernelized formulation. In the first stage, the values for C are 2^i, i = {0, 3, 6, 9, 12} and the values for γ are 2^i, i = {−12, −9, −6, −3, 0}. We select final values for C from 2^i C0, i = {−2, −1, 0, 1, 2} and values for γ from 2^i γ0, i = {−2, −1, 0, 1, 2} in the second stage, where C0 and γ0 are the best parameters obtained in the first stage. |
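The quoted two-stage tuning protocol can be sketched in code. This is an illustrative reconstruction, not the authors' implementation: the paper uses the Shark library and its own AL0-1 solver, whereas the sketch below substitutes scikit-learn's `SVC` and a toy dataset as stand-ins; only the coarse and fine power-of-two grids follow the quoted text.

```python
# Sketch of the paper's two-stage, five-fold cross-validation for tuning
# C and the RBF kernel parameter gamma. SVC and load_iris are stand-ins
# (assumptions); the grid values come from the quoted protocol.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Stage 1: coarse grid, C = 2^i for i in {0,3,6,9,12},
# gamma = 2^i for i in {-12,-9,-6,-3,0}.
coarse = GridSearchCV(
    SVC(kernel="rbf"),
    {"C": [2.0**i for i in (0, 3, 6, 9, 12)],
     "gamma": [2.0**i for i in (-12, -9, -6, -3, 0)]},
    cv=5,
)
coarse.fit(X, y)
C0, gamma0 = coarse.best_params_["C"], coarse.best_params_["gamma"]

# Stage 2: fine grid centred on the stage-1 winners,
# scaling factors 2^i for i in {-2,-1,0,1,2}.
fine = GridSearchCV(
    SVC(kernel="rbf"),
    {"C": [C0 * 2.0**i for i in range(-2, 3)],
     "gamma": [gamma0 * 2.0**i for i in range(-2, 3)]},
    cv=5,
)
fine.fit(X, y)
print(fine.best_params_)
```

The two-stage design keeps the search cheap: 25 coarse candidates locate the right order of magnitude, then 25 fine candidates refine within a factor of four of the stage-1 winners.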