Stochastic Online AUC Maximization

Authors: Yiming Ying, Longyin Wen, Siwei Lyu

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We establish theoretical convergence of SOLAM with high probability and demonstrate its effectiveness on standard benchmark datasets. In this section, we report experimental evaluations of the SOLAM algorithm and compare its performance with existing state-of-the-art learning algorithms for AUC maximization."
Researcher Affiliation | Academia | Department of Mathematics and Statistics, SUNY at Albany, Albany, NY 12222, USA; Department of Computer Science, SUNY at Albany, Albany, NY 12222, USA
Pseudocode | Yes | "Table 1: Pseudo code of the proposed algorithm."
Open Source Code | No | The paper does not provide a link to open-source code or explicitly state its release.
Open Datasets | Yes | "Information about these datasets is summarized in Table 2." Table 2 lists standard benchmark datasets such as diabetes, fourclass, german, splice, usps, a9a, mnist, acoustic, ijcnn1, covtype, sector, and news20.
Dataset Splits | Yes | "In the training phase, we use five-fold cross validation to determine the initial learning rate ζ ∈ [1:9:100] and the bound on w, R ∈ 10^[-1:1:5], by a grid search."
Hardware Specification | Yes | Running times were reported on a workstation with 12 nodes, each with an Intel Xeon E5-2620 2.0GHz CPU and 64GB RAM.
Software Dependencies | No | SOLAM was implemented in MATLAB, and MATLAB code for the compared methods was obtained from the authors of the corresponding papers. No specific version numbers for MATLAB or other software dependencies are provided.
Experiment Setup | No | The paper mentions a grid search for the learning rate and the bound on w, but does not provide specific hyperparameter values, training configurations, or system-level settings in detail.
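The pseudocode referenced in Table 1 is not reproduced on this page. As a rough illustration only, the following is a minimal NumPy sketch of a SOLAM-style stochastic saddle-point update: gradient descent on the primal variables (w, a, b), gradient ascent on the dual variable α, with projection onto bounded domains and a step size decaying as O(1/√t). The function names, projection radii, and step-size schedule are assumptions for the sketch, not the authors' MATLAB implementation.

```python
import numpy as np

def solam_step(w, a, b, alpha, x, y, p, gamma, R):
    """One stochastic saddle-point update (sketch): descend on (w, a, b),
    ascend on alpha, then project back onto bounded domains."""
    score = w @ x
    if y == 1:
        gw = 2 * (1 - p) * (score - a) * x - 2 * (1 + alpha) * (1 - p) * x
        ga = -2 * (1 - p) * (score - a)
        gb = 0.0
        galpha = -2 * (1 - p) * score - 2 * p * (1 - p) * alpha
    else:
        gw = 2 * p * (score - b) * x + 2 * (1 + alpha) * p * x
        ga = 0.0
        gb = -2 * p * (score - b)
        galpha = 2 * p * score - 2 * p * (1 - p) * alpha
    w = w - gamma * gw
    a = a - gamma * ga
    b = b - gamma * gb
    alpha = alpha + gamma * galpha
    # project w onto the L2 ball of radius R; clip the scalar variables
    norm = np.linalg.norm(w)
    if norm > R:
        w = w * (R / norm)
    a, b = float(np.clip(a, -R, R)), float(np.clip(b, -R, R))
    alpha = float(np.clip(alpha, -2 * R, 2 * R))
    return w, a, b, alpha

def solam(X, y, zeta=1.0, R=10.0):
    """Single pass over the data stream; p is the running positive rate."""
    n, d = X.shape
    w, a, b, alpha = np.zeros(d), 0.0, 0.0, 0.0
    w_bar, gamma_sum = np.zeros(d), 0.0
    n_pos = 0
    for t in range(1, n + 1):
        n_pos += int(y[t - 1] == 1)
        p = n_pos / t
        gamma = zeta / np.sqrt(t)  # decaying step size, O(1/sqrt(t))
        w, a, b, alpha = solam_step(w, a, b, alpha, X[t - 1], y[t - 1],
                                    p, gamma, R)
        w_bar += gamma * w          # step-size-weighted average iterate
        gamma_sum += gamma
    return w_bar / gamma_sum
```

On a linearly separable toy stream this single-pass sketch already recovers a direction whose scores rank positives above negatives, which is exactly what AUC measures.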
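The hyperparameter search described under Dataset Splits, reading [1:9:100] and 10^[-1:1:5] as MATLAB colon notation (ζ ∈ {1, 10, ..., 100}, R ∈ {0.1, 1, ..., 1e5}), can be sketched as a five-fold cross-validated grid search. The `cv_score` callback is a hypothetical stand-in for training on four folds and scoring (e.g., AUC) on the held-out fold; it is not from the paper.

```python
import itertools
import numpy as np

def five_fold_indices(n, seed=0):
    """Partition n sample indices into five disjoint folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, 5)

def grid_search(n, cv_score):
    """Pick (zeta, R) maximizing the mean five-fold validation score.

    cv_score(train_idx, val_idx, zeta, R) -> float is a user-supplied
    stand-in for "train on train_idx, evaluate on val_idx".
    """
    # MATLAB 1:9:100 -> 1, 10, ..., 100; 10^[-1:1:5] -> 0.1, ..., 1e5
    zetas = np.arange(1, 101, 9, dtype=float)
    Rs = 10.0 ** np.arange(-1, 6)
    folds = five_fold_indices(n)
    best, best_score = None, -np.inf
    for zeta, R in itertools.product(zetas, Rs):
        scores = []
        for k in range(5):
            val = folds[k]
            train = np.concatenate([folds[j] for j in range(5) if j != k])
            scores.append(cv_score(train, val, zeta, R))
        mean_score = float(np.mean(scores))
        if mean_score > best_score:
            best, best_score = (zeta, R), mean_score
    return best, best_score
```

For example, with a toy objective peaked at ζ = 10 and R = 10, `grid_search(100, lambda tr, va, z, R: -(z - 10) ** 2 - (np.log10(R) - 1) ** 2)` returns `((10.0, 10.0), 0.0)`.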