Fast Stochastic AUC Maximization with $O(1/n)$-Convergence Rate

Authors: Mingrui Liu, Xiaoxuan Zhang, Zaiyi Chen, Xiaoyu Wang, Tianbao Yang

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on eight large-scale benchmark data sets demonstrate the superior performance of the proposed algorithm compared with existing stochastic or online algorithms for AUC maximization.
Researcher Affiliation | Collaboration | 1) Department of Computer Science, The University of Iowa, IA 52242, USA; 2) University of Science and Technology of China; 3) Intellifusion.
Pseudocode | Yes | Algorithm 1 FSAUC
Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a repository for the methodology described.
Open Datasets | Yes | We use eight large-scale benchmark datasets from the LIBSVM website, ranging from high-dimensional to low-dimensional and from balanced to imbalanced class distributions. The statistics of these datasets are summarized in Table 1. (http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/)
Dataset Splits | Yes | We randomly divide each dataset into three sets: training, validation, and testing. For the a9a and w8a datasets, we randomly split the given testing set into half validation and half testing. For the datasets that do not explicitly provide a testing set, we randomly split the entire data 4:1:1 into training, validation, and testing. (A minimal split sketch appears after this table.)
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiment.
Experiment Setup | Yes | The parameters of each algorithm are tuned on the validation data. FSAUC has two parameters, R and G. R is selected from $10^{[-1:1:5]}$. G affects the step size of each epoch (Algorithm 1, lines 4-5). Since $\eta_1 = \frac{\beta_0}{\sqrt{3n_0}\,G}R_0$ and $\eta_{k+1} = \frac{\beta_k}{2\beta_{k-1}}\eta_k$, we equivalently tune $\eta_1 \in 2^{[-10:1:10]}$. For SOLAM, following the same strategy as the original paper (Ying et al., 2016), we tune R in $10^{[-1:1:5]}$ and the learning rate in $2^{[-10:1:10]}$. OPAUC has two versions... Both versions of OPAUC share two parameters, the step size $\eta$ and the regularization parameter $\lambda$. Following the suggestion of the original paper (Gao et al., 2013), we tune $\eta \in 2^{[-12:1:-4]}$ and $\lambda \in 2^{[-10:1:0]}$. (A sketch of these grids and the step-size schedule also appears after this table.)
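
The split protocol in the "Dataset Splits" row is straightforward to reproduce. Below is a minimal sketch, assuming each dataset is loaded as feature/label arrays; the helper names, the use of scikit-learn, and the fixed random seed are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of the split protocol from the "Dataset Splits" row.
# Helper names, scikit-learn usage, and the fixed seed are hypothetical.
from sklearn.model_selection import train_test_split

def split_without_given_test_set(X, y, seed=0):
    """Datasets with no provided test set: random 4:1:1 train/validation/test split."""
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, train_size=4 / 6, random_state=seed)
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, test_size=0.5, random_state=seed)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)

def split_given_test_set(X_test_given, y_test_given, seed=0):
    """a9a / w8a: split the provided test set into half validation, half testing."""
    X_val, X_test, y_val, y_test = train_test_split(
        X_test_given, y_test_given, test_size=0.5, random_state=seed)
    return (X_val, y_val), (X_test, y_test)
```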
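For the experiment setup, the tuning grids and the per-epoch step-size recursion can be sketched as follows. This is a rough illustration only: the concrete values of $\beta_k$ and $R_k$ come from Algorithm 1 in the paper, and the halving schedule of $\beta_k$ used here is an assumption.

```python
# Sketch of the hyperparameter grids and per-epoch step-size recursion from the
# "Experiment Setup" row. The halving of beta_k is an assumed placeholder; the
# actual schedule is given by Algorithm 1 (FSAUC) in the paper.
import numpy as np

R_grid = 10.0 ** np.arange(-1, 6)             # R in 10^[-1:1:5]
eta1_grid = 2.0 ** np.arange(-10, 11)         # eta_1 in 2^[-10:1:10] (tunes G implicitly)
solam_lr_grid = 2.0 ** np.arange(-10, 11)     # SOLAM learning rate in 2^[-10:1:10]
opauc_eta_grid = 2.0 ** np.arange(-12, -3)    # OPAUC step size in 2^[-12:1:-4]
opauc_lambda_grid = 2.0 ** np.arange(-10, 1)  # OPAUC regularization in 2^[-10:1:0]

def epoch_step_sizes(eta1, num_epochs, beta0=1.0):
    """Generate eta_1, ..., eta_m via eta_{k+1} = beta_k / (2 * beta_{k-1}) * eta_k."""
    betas = [beta0 * 0.5 ** k for k in range(num_epochs)]  # assumed halving of beta_k
    etas = [eta1]
    for k in range(1, num_epochs):
        etas.append(betas[k] / (2.0 * betas[k - 1]) * etas[-1])
    return etas
```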