Stochastic Proximal Algorithms for AUC Maximization

Authors: Michael Natole, Yiming Ying, Siwei Lyu

ICML 2018

Reproducibility assessment (variable: result, followed by the supporting LLM response):
Research Type: Experimental
"In this paper, we develop a novel stochastic proximal algorithm for AUC maximization which is referred to as SPAM. Compared with the previous literature, our algorithm SPAM applies to a non-smooth penalty function, and achieves a convergence rate of O(log t / t) for strongly convex functions while both space and per-iteration costs are of one datum."
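The "non-smooth penalty function" mentioned in the abstract covers, in particular, the elastic net used by the SPAM-NET variant, whose proximal map has a closed form. A minimal sketch (the function name and parameter names are ours, not the paper's):

```python
import numpy as np

def prox_elastic_net(v, eta, beta1, beta2):
    """Proximal map of Omega(w) = beta1*||w||_1 + (beta2/2)*||w||^2
    with step size eta: soft-threshold each coordinate by eta*beta1,
    then shrink by 1/(1 + eta*beta2)."""
    return np.sign(v) * np.maximum(np.abs(v) - eta * beta1, 0.0) / (1.0 + eta * beta2)
```

Setting beta1 = 0 recovers the plain multiplicative shrinkage that serves as the proximal step for the L2 penalty in SPAM-L2.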
Researcher Affiliation: Academia
"Department of Mathematics and Statistics, SUNY at Albany, Albany, NY, USA; Department of Computer Science, SUNY at Albany, Albany, NY, USA."
Pseudocode: Yes
"Algorithm 1: Stochastic Proximal AUC Maximization (SPAM)"
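Algorithm 1 alternates a stochastic AUC gradient step on one datum with a proximal step on the penalty. A single-pass sketch of the SPAM-L2 variant, under our own simplifying assumptions (running class means stand in for the conditional expectations E[x|y=±1] that the gradient formula uses, and the decaying step-size schedule is illustrative, not the paper's exact constants):

```python
import numpy as np

def spam_l2(X, y, beta=1e-3, c=1.0, seed=0):
    """Sketch of SPAM with an L2 penalty Omega(w) = (beta/2)*||w||^2.

    At step t we draw one datum, form the stochastic gradient of the
    AUC objective using running estimates of p = P(y=+1) and of the
    conditional means, then apply the proximal step, which for the L2
    penalty is a simple shrinkage by 1/(1 + eta*beta).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    mean_pos = np.zeros(d)   # running estimate of E[x | y = +1]
    mean_neg = np.zeros(d)   # running estimate of E[x | y = -1]
    n_pos = n_neg = 0
    for t, i in enumerate(rng.permutation(n), start=1):
        x, label = X[i], y[i]
        if label == 1:
            n_pos += 1
            mean_pos += (x - mean_pos) / n_pos
        else:
            n_neg += 1
            mean_neg += (x - mean_neg) / n_neg
        p = n_pos / (n_pos + n_neg)
        a = w @ mean_pos       # optimal a(w)
        b = w @ mean_neg       # optimal b(w)
        alpha = b - a          # optimal alpha(w)
        # Stochastic gradient of the AUC objective at one datum.
        if label == 1:
            grad = 2 * (1 - p) * ((w @ x) - a) * x - 2 * (1 + alpha) * (1 - p) * x
        else:
            grad = 2 * p * ((w @ x) - b) * x + 2 * (1 + alpha) * p * x
        eta = c / t                                  # assumed schedule
        w = (w - eta * grad) / (1 + eta * beta)      # prox of (beta/2)||w||^2
    return w
```

On linearly separable synthetic data this sketch recovers a scoring direction with AUC well above chance; both the per-iteration cost and the extra state (two running means) are of one datum, consistent with the abstract's claim.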
Open Source Code: No
The paper states that "All experiments were conducted with MATLAB and the MATLAB codes for the compared methods were obtained from the authors," but does not state that code for the proposed method (SPAM) is open-source or publicly available.
Open Datasets: Yes
"All of these datasets are available to download from the LIBSVM (Chang & Lin, 2011) and UCI machine learning repository (Frank & Asuncion, 2010)."
Dataset Splits: Yes
"We used 80% of the data for training and the remaining 20% for testing. The results are based on 20 runs for each dataset, which we used to calculate the average AUC score and standard deviation. To determine the proper parameters for each dataset, we conduct 5-fold cross validation on the training sets to determine the parameter β ∈ 10^[−5:5] for SPAM-L2 and β1 ∈ 10^[−5:5] for SPAM-NET."
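The evaluation protocol quoted above (80/20 split, 20 runs, 5-fold cross validation over β ∈ 10^[−5:5]) can be sketched generically. Here `fit` is a placeholder for training SPAM (any callable `fit(X, y, beta) -> w`); the helper names are ours, not the paper's:

```python
import numpy as np

def auc_score(w, X, y):
    """Pairwise AUC: fraction of (positive, negative) pairs ranked correctly."""
    s_pos, s_neg = X[y == 1] @ w, X[y == -1] @ w
    return np.mean(s_pos[:, None] > s_neg[None, :])

def select_beta(X, y, fit, betas, n_folds=5, rng=None):
    """Pick beta by 5-fold cross-validated AUC on the training set."""
    if rng is None:
        rng = np.random.default_rng(0)
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    means = []
    for beta in betas:
        scores = []
        for k in range(n_folds):
            trn = np.concatenate([folds[j] for j in range(n_folds) if j != k])
            w = fit(X[trn], y[trn], beta)
            scores.append(auc_score(w, X[folds[k]], y[folds[k]]))
        means.append(np.mean(scores))
    return betas[int(np.argmax(means))]

def evaluate(X, y, fit, betas=10.0 ** np.arange(-5, 6), n_runs=20, seed=0):
    """80/20 train/test split repeated n_runs times; mean/std of test AUC."""
    rng = np.random.default_rng(seed)
    aucs = []
    for _ in range(n_runs):
        idx = rng.permutation(len(y))
        cut = int(0.8 * len(y))
        trn, tst = idx[:cut], idx[cut:]
        beta = select_beta(X[trn], y[trn], fit, betas, rng=rng)
        w = fit(X[trn], y[trn], beta)
        aucs.append(auc_score(w, X[tst], y[tst]))
    return float(np.mean(aucs)), float(np.std(aucs))
```

As a smoke test, plugging in a simple mean-difference scorer (again a stand-in, not the paper's model) reproduces the shape of the reported numbers: an average AUC and a standard deviation over repeated runs.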
Hardware Specification: No
The paper states "All experiments were conducted with MATLAB" but does not provide any specific hardware details such as GPU/CPU models or memory specifications.
Software Dependencies: No
The paper mentions "All experiments were conducted with MATLAB" but does not specify a MATLAB version or any other software dependencies with version numbers.
Experiment Setup: Yes
"To determine the proper parameters for each dataset, we conduct 5-fold cross validation on the training sets to determine the parameter β ∈ 10^[−5:5] for SPAM-L2 and β1 ∈ 10^[−5:5] for SPAM-NET."