Parametric Dual Maximization for Non-Convex Learning Problems

Authors: Yuxun Zhou, Zhaoyi Kang, Costas Spanos

AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on two representative applications demonstrate the effectiveness of PDM compared to other approaches." "In this section, we report optimization and generalization performance of PDM for the training of S3VM and Latent SVM (LSVM)."
Researcher Affiliation | Academia | Yuxun Zhou, Department of EECS, UC Berkeley, yxzhou@berkeley.edu; Zhaoyi Kang, Department of EECS, UC Berkeley, kangzy@berkeley.edu; Costas J. Spanos, Department of EECS, UC Berkeley, spanos@berkeley.edu
Pseudocode | Yes | Algorithm 1: Parametric Dual Maximization
Open Source Code | No | "More results and a Matlab implementation could be found online." This statement is too vague and does not provide concrete access (e.g., a specific link) to the source code for the methodology described.
Open Datasets | Yes | "Details about the datasets are listed in Table 1." Table 1: Data sets; D1–D4 are used for S3VM and D5–D8 for LSVM.
Dataset Splits | Yes | In each experiment, 60% of the samples are used for training, in which only a small portion are assumed to be labeled samples. 10% of the data are used as a validation set for choosing hyperparameters. With the remaining 30%, we evaluate the generalization performance. (See the split sketch after the table.)
Hardware Specification | Yes | All experiments are conducted on a workstation with dual Xeon X5687 CPUs and 72 GB memory.
Software Dependencies | No | The paper mentions a "Matlab implementation" but does not provide specific version numbers for Matlab or any other software libraries/solvers used in the experiments.
Experiment Setup | Yes | The best hyperparameter combination (C1, C2, σ²) is chosen with cross validation from C1 ∈ 10^{0:0.5:3}, σ² ∈ (1/2)^{-3:1:3}, and C2 ∈ 10^{-8:1:0} for S3VM and C2 ∈ 10^{-4:1:4} for LSVM (MATLAB-style start:step:stop ranges over the exponents; the negative exponent signs are reconstructed from the garbled extraction). See the grid-search sketch below.
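
The 60/10/30 split protocol quoted under "Dataset Splits" can be reproduced in a few lines. Below is a minimal Python/NumPy sketch; the function name, random seed, and the handling of the small labeled fraction are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def split_indices(n_samples, seed=0):
    """Shuffle sample indices and split them 60% / 10% / 30% into
    train / validation / test sets, mirroring the quoted protocol.
    (Hypothetical helper; the paper does not specify how samples are shuffled.)"""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(0.6 * n_samples)
    n_val = int(0.1 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# Usage: for S3VM, only a small portion of the 60% training split would
# additionally be marked as labeled; the 10% split selects hyperparameters
# and the remaining 30% measures generalization.
train_idx, val_idx, test_idx = split_indices(1000)
```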
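As referenced in the "Experiment Setup" row, the hyperparameter grids are given in MATLAB-style start:step:stop notation over the exponents. The sketch below shows one way the grids and the selection loop could look in Python; the exponent signs follow the reconstruction above, and `train_and_fit` / `validate` are hypothetical stand-ins for the PDM training routine and validation scoring, since no code is released.

```python
import numpy as np
from itertools import product

# Grids reconstructed from the quoted MATLAB-style ranges (exponent signs
# are an assumption based on the notation in the paper's setup description).
C1_grid = 10.0 ** np.arange(0.0, 3.5, 0.5)       # C1 in 10^{0:0.5:3}
sigma2_grid = 0.5 ** np.arange(-3, 4)            # sigma^2 in (1/2)^{-3:1:3}
C2_grid = {
    "S3VM": 10.0 ** np.arange(-8, 1),            # C2 in 10^{-8:1:0}
    "LSVM": 10.0 ** np.arange(-4, 5),            # C2 in 10^{-4:1:4}
}

def select_hyperparameters(task, train_and_fit, validate):
    """Grid search over (C1, C2, sigma^2), scoring each combination on the
    held-out 10% validation split described under "Dataset Splits".
    `train_and_fit` and `validate` are hypothetical callables."""
    best, best_score = None, -np.inf
    for C1, C2, sigma2 in product(C1_grid, C2_grid[task], sigma2_grid):
        score = validate(train_and_fit(C1=C1, C2=C2, sigma2=sigma2))
        if score > best_score:
            best, best_score = (C1, C2, sigma2), score
    return best
```

An exhaustive loop like this is feasible here because the grids are small: roughly 7 × 7 × 9 combinations per task under the reconstruction above.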