Group Sparse Additive Machine

Authors: Hong Chen, Xiaoqian Wang, Cheng Deng, Heng Huang

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on synthetic data and seven benchmark datasets consistently show the effectiveness of our new approach. In Section 4, experimental results on both simulated examples and real data are presented and discussed.
Researcher Affiliation | Academia | Hong Chen (1), Xiaoqian Wang (1), Cheng Deng (2), Heng Huang (1); (1) Department of Electrical and Computer Engineering, University of Pittsburgh, USA; (2) School of Electronic Engineering, Xidian University, China
Pseudocode | No | For given {τ_j}, the optimization problem of Group SAM can be computed efficiently via an accelerated proximal gradient descent algorithm developed in [30]. Due to space limitation, we don’t recall the optimization algorithm here again. (A hedged sketch of this optimization scheme appears after the table.)
Open Source Code | No | The paper does not provide an explicit statement about releasing the source code for Group SAM or a link to a code repository.
Open Datasets | Yes | In this subsection, we use 7 benchmark datasets from the UCI repository [12] to compare the classification performance of different methods. The 7 benchmark datasets are: Ecoli, Indians Diabetes, Breast Cancer, Stock, Balance Scale, Contraceptive Method Choice (CMC) and Fertility.
Dataset Splits | Yes | In comparison, we adopt 2-fold cross validation and report the average performance of each method. We tune the hyper-parameters via 2-fold cross validation on the training data and report the best parameter w.r.t. classification accuracy of each method.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory) used for running the experiments.
Software Dependencies | No | The paper mentions using the LIBSVM toolbox [2] but does not specify its version number or any other software dependencies with versions.
Experiment Setup | Yes | We determine the hyper-parameter of all models, i.e., parameter C of SVM, L1SVM and Gaussian SVM, parameter λ of SAM, parameter λ of GroupSpAM, and parameter λ in Eq. (6) of Group SAM, in the range {10^-3, 10^-2, ..., 10^3}. We tune the hyper-parameters via 2-fold cross validation on the training data and report the best parameter w.r.t. classification accuracy of each method. In the accelerated proximal gradient descent algorithm for both SAM and Group SAM, we set µ = 0.5 and the number of maximum iterations to 2000. (A sketch of this tuning protocol follows the table.)
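
Since the paper defers its optimization details to [30], the following is a minimal, hypothetical sketch of the general scheme the Pseudocode row refers to: FISTA-style accelerated proximal gradient descent with a group soft-thresholding proximal step. All names here (group_soft_threshold, apg_group_sparse, grad_loss) are illustrative, a smooth surrogate for the hinge loss is assumed, and this is not the authors' implementation.

```python
import numpy as np

def group_soft_threshold(v, groups, tau):
    """Proximal operator of the group-lasso penalty: shrink each
    coefficient group of v toward zero by threshold tau."""
    out = v.copy()
    for g in groups:  # g: array of indices forming one group
        norm_g = np.linalg.norm(v[g])
        scale = max(0.0, 1.0 - tau / norm_g) if norm_g > 0 else 0.0
        out[g] = scale * v[g]
    return out

def apg_group_sparse(grad_loss, x0, groups, lam, step=0.5, max_iter=2000):
    """FISTA-style accelerated proximal gradient descent.

    grad_loss : gradient of a smooth data-fit term (assumption: the
                non-smooth hinge loss is replaced by a smooth surrogate)
    step      : step size (the paper reports mu = 0.5)
    max_iter  : iteration cap (the paper reports 2000)
    """
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(max_iter):
        # gradient step on the smooth part, then group-wise shrinkage
        x_new = group_soft_threshold(y - step * grad_loss(y), groups, step * lam)
        # Nesterov momentum update
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```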
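
The tuning protocol in the Experiment Setup row can also be written down directly. The sketch below uses scikit-learn's GridSearchCV with 2-fold cross validation, selection by classification accuracy, and the reported grid {10^-3, ..., 10^3} for the SVM baseline's parameter C. Using scikit-learn here is an assumption (the paper used the LIBSVM toolbox), and X_train, y_train are placeholders for training data.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Hyper-parameter grid reported in the paper: {10^-3, 10^-2, ..., 10^3}
param_grid = {"C": [10.0 ** k for k in range(-3, 4)]}

# 2-fold CV on the training data, selecting by classification accuracy
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=2, scoring="accuracy")
# search.fit(X_train, y_train)   # X_train, y_train: placeholder training data
# best_C = search.best_params_["C"]
```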