R-SVM+: Robust Learning with Privileged Information

Authors: Xue Li, Bo Du, Chang Xu, Yipeng Zhang, Lefei Zhang, Dacheng Tao

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on real-world datasets demonstrate the necessity of studying robust SVM+ and the effectiveness of the proposed algorithm.
Researcher Affiliation | Academia | Xue Li (1,2), Bo Du (1), Chang Xu (3), Yipeng Zhang (1), Lefei Zhang (1), Dacheng Tao (3). (1) School of Computer Science, Wuhan University, China; (2) LIESMARS, Wuhan University, China; (3) UBTECH Sydney AI Centre, SIT, FEIT, University of Sydney, Australia.
Pseudocode | No | No pseudocode or clearly labeled algorithm block was found in the paper.
Open Source Code | No | The paper does not contain any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | The MNIST+ dataset [Vapnik and Vashist, 2009], the RGB-D Face dataset [Hg et al., 2012], and the Human Activity Recognition dataset [Anguita et al., 2013].
Dataset Splits | Yes | The MNIST+ dataset is randomly split into a training set of 100 images, a test set of 1,866 images, and a validation set of 4,002 images [Vapnik and Vashist, 2009]. For the RGB-D Face dataset, ... 40% of the color and corresponding depth image pairs per class are randomly chosen for training, 30% of the image pairs per class for testing, and the remaining 30% for validation; this split is repeated 10 times. ... The remaining examples from the desired class and the same number of examples from the rest of the classes are used as validation examples.
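To make the RGB-D Face protocol concrete, below is a minimal sketch of one stratified 40/30/30 split, repeated over 10 seeds, using scikit-learn. The stand-in feature matrix and labels are illustrative only; the paper releases no code, so this is a plausible reconstruction of the stated protocol, not the authors' script.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in data: the paper uses color/depth image pairs per class
X = np.random.rand(100, 64)       # hypothetical features
y = np.repeat(np.arange(10), 10)  # hypothetical labels, 10 per class

def split_40_30_30(X, y, seed):
    """One random stratified 40/30/30 train/test/validation split."""
    # 40% of each class for training, 60% held out
    X_tr, X_rest, y_tr, y_rest = train_test_split(
        X, y, train_size=0.4, stratify=y, random_state=seed)
    # Halve the held-out 60%: 30% test, 30% validation
    X_te, X_va, y_te, y_va = train_test_split(
        X_rest, y_rest, train_size=0.5, stratify=y_rest, random_state=seed)
    return (X_tr, y_tr), (X_te, y_te), (X_va, y_va)

# The paper repeats the split 10 times and averages the results
splits = [split_40_30_30(X, y, seed) for seed in range(10)]
```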
Hardware Specification | No | The paper does not provide specific hardware details such as the GPU or CPU models used to run the experiments.
Software Dependencies | No | The paper mentions that the quadratic programming problem can be "efficiently optimized using off-the-shelf solvers" but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, or specific solver libraries with their versions).
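For reference, "off-the-shelf solvers" here means generic quadratic programming packages such as cvxopt. The sketch below solves the standard soft-margin SVM dual as an illustration of that workflow; it is not the R-SVM+ program from the paper, which augments the dual with terms from the privileged (correcting) space.

```python
import numpy as np
from cvxopt import matrix, solvers

def svm_dual_qp(K, y, C):
    """Solve the standard soft-margin SVM dual with a generic QP solver:
        min_a  0.5 a^T Q a - 1^T a   s.t.  0 <= a <= C,  y^T a = 0,
    where Q_ij = y_i y_j K_ij. Illustrative only: R-SVM+ adds
    privileged-information terms not shown here.
    """
    n = len(y)
    Q = np.outer(y, y) * K
    P = matrix(Q.astype(float))
    q = matrix(-np.ones(n))
    G = matrix(np.vstack([-np.eye(n), np.eye(n)]))       # -a <= 0, a <= C
    h = matrix(np.hstack([np.zeros(n), C * np.ones(n)]))
    A = matrix(y.reshape(1, -1).astype(float))
    b = matrix(0.0)
    solvers.options['show_progress'] = False
    sol = solvers.qp(P, q, G, h, A, b)
    return np.array(sol['x']).ravel()                    # dual variables alpha

# Toy usage with a linear kernel
X = np.random.randn(20, 2)
y = np.where(X[:, 0] > 0, 1.0, -1.0)
alpha = svm_dual_qp(X @ X.T, y, C=1.0)
```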
Experiment Setup | Yes | For all the methods, the regularization parameter C is selected from 10^{-2,-1,0,1,2} and the Gaussian kernel is used. For SVM+, L2-SVM+, and the proposed R-SVM+, the Gaussian kernel parameter is set to γ = 1/D, where D is the mean of the distances among examples in the training set, following [Li et al., 2016]. For SVM and RSVM-RHHQ, γ is selected from 10^{-3,-2,-1,0,1,2,3}. For RSVM-RHHQ, the scaling constant η is varied over {0.01, 0.1, 0.5, 1, 2, 3, 10, 100}. For SVM+-based methods, the trade-off parameter ρ is selected from 10^{-2,-1,0,1,2}. For the proposed R-SVM+, the parameter σ is varied over {5, 10, 50, 100} and λ over 10^{-5,-4,...,0,1}. The best parameters for all methods are determined with a joint cross-validation model selection strategy on the validation set.
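A rough reconstruction of that model selection is sketched below: build the reported grids, compute the γ = 1/D kernel heuristic, and keep the combination scoring best on the validation split. The fit_fn and score_fn callables are hypothetical placeholders for the paper's unreleased R-SVM+ training and evaluation code.

```python
import itertools
import numpy as np
from scipy.spatial.distance import pdist

def gaussian_gamma(X):
    """Kernel heuristic from the paper: gamma = 1/D, where D is the mean
    pairwise distance among training examples [Li et al., 2016]."""
    return 1.0 / pdist(X).mean()

# Parameter grids as reported for R-SVM+
Cs      = [10.0 ** k for k in (-2, -1, 0, 1, 2)]
rhos    = [10.0 ** k for k in (-2, -1, 0, 1, 2)]
sigmas  = [5, 10, 50, 100]
lambdas = [10.0 ** k for k in range(-5, 2)]   # 10^-5 ... 10^1

def select_rsvmplus(fit_fn, score_fn, X, X_priv, y, X_val, y_val):
    """Joint grid search scored on the validation set. fit_fn and score_fn
    are hypothetical stand-ins for the (unreleased) R-SVM+ implementation."""
    gamma = gaussian_gamma(X)
    best_params, best_acc = None, -np.inf
    for C, rho, sigma, lam in itertools.product(Cs, rhos, sigmas, lambdas):
        model = fit_fn(X, X_priv, y, C=C, rho=rho, sigma=sigma,
                       lam=lam, gamma=gamma)
        acc = score_fn(model, X_val, y_val)
        if acc > best_acc:
            best_params, best_acc = (C, rho, sigma, lam), acc
    return best_params
```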