AUC Optimization with a Reject Option

Authors: Song-Qing Shen, Bin-Bin Yang, Wei Gao (pp. 5684-5691)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We finally present extensive empirical studies to verify the effectiveness of the proposed algorithm."
Researcher Affiliation | Academia | "National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, 210023, China. {shensq, yangbb, gaow}@lamda.nju.edu.cn"
Pseudocode | Yes | "Algorithm 1: The AUCRO Algorithm"
Open Source Code | No | The paper does not provide explicit statements or links to open-source code for the described methodology; footnote 1 points to a third-party tool.
Open Datasets | Yes | "We evaluate the performance of our method on ten benchmark datasets, as summarized in Table 1." Footnote 1: https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/ (a loading sketch follows the table)
Dataset Splits | Yes | "Two trials of 5-fold cross-validation are executed on training sets to decide the learning rate η_t ∈ 2^[-12:10] and the regularized parameter λ ∈ 2^[-10:2] for our algorithm." (a cross-validation sketch follows the table)
Hardware Specification | No | No specific hardware details (such as GPU or CPU models, or detailed system specifications) used for the experiments are mentioned in the paper.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., libraries, frameworks, or solvers) are explicitly stated in the paper.
Experiment Setup | Yes | "Two trials of 5-fold cross-validation are executed on training sets to decide the learning rate η_t ∈ 2^[-12:10] and the regularized parameter λ ∈ 2^[-10:2] for our algorithm. For FSAUC, we tune the initial stepsize η_1 ∈ 2^[-10:10] and the parameter R ∈ 10^[-1:5], as recommended in (Liu et al. 2018). For OPAUC, the stepsize η_t is decided within the range 2^[-12:10] and the regularization parameter λ within the range 2^[-10:2], as recommended in (Gao et al. 2016). For OAMseq and OAMgd, the buffer sizes are fixed to 100 and the penalty parameter C is decided within 2^[-10:10], as recommended in (Zhao et al. 2011)." (the baseline grids are collected in a sketch after the table)
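
Since the benchmarks come from the LIBSVM repository (footnote 1), each dataset file is in svmlight/libsvm format and can be read directly. A minimal sketch, assuming scikit-learn is available; the file name "a9a" is purely illustrative and not taken from the paper:

```python
from sklearn.datasets import load_svmlight_file

# Hypothetical example: any file downloaded from the LIBSVM repository
# (footnote 1 above) is in svmlight format; "a9a" is an illustrative name.
X, y = load_svmlight_file("a9a")
X = X.toarray()  # densify only if the downstream method needs dense input
```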
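
The tuning protocol quoted in the Dataset Splits and Experiment Setup rows (two independent trials of 5-fold cross-validation over log-spaced grids) can be sketched as below. This is not the authors' released code; the trainer `train_aucro` and its `decision_function` interface are hypothetical placeholders, and the grid signs are reconstructed as noted above:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import roc_auc_score

# Grids quoted in the paper (minus signs reconstructed):
eta_grid = 2.0 ** np.arange(-12, 11)  # eta_t in 2^[-12:10]
lam_grid = 2.0 ** np.arange(-10, 3)   # lambda in 2^[-10:2]

def cv_auc(train_fn, X, y, eta, lam, n_trials=2, n_splits=5):
    """Mean validation AUC over two independent trials of 5-fold CV."""
    scores = []
    for trial in range(n_trials):
        kf = KFold(n_splits=n_splits, shuffle=True, random_state=trial)
        for tr_idx, va_idx in kf.split(X):
            # train_fn is a hypothetical trainer for the method under tuning
            model = train_fn(X[tr_idx], y[tr_idx], eta=eta, lam=lam)
            scores.append(
                roc_auc_score(y[va_idx], model.decision_function(X[va_idx])))
    return float(np.mean(scores))

# Hypothetical usage: pick the (eta, lambda) pair with the best CV AUC.
# best_eta, best_lam = max(
#     ((e, l) for e in eta_grid for l in lam_grid),
#     key=lambda p: cv_auc(train_aucro, X_train, y_train, *p))
```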
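
For reference, the baseline grids quoted in the Experiment Setup row translate to the following log-spaced ranges. The minus signs are reconstructed from the cited papers' conventions, so treat them as an assumption rather than a verified transcription:

```python
import numpy as np

# OPAUC (Gao et al. 2016): stepsize and regularizer, same grids as AUCRO.
opauc_eta = 2.0 ** np.arange(-12, 11)   # 2^[-12:10]
opauc_lam = 2.0 ** np.arange(-10, 3)    # 2^[-10:2]

# FSAUC (Liu et al. 2018): initial stepsize and parameter R.
fsauc_eta1 = 2.0 ** np.arange(-10, 11)  # 2^[-10:10]
fsauc_R = 10.0 ** np.arange(-1, 6)      # 10^[-1:5]

# OAMseq / OAMgd (Zhao et al. 2011): fixed buffer size, tuned penalty C.
oam_buffer_size = 100
oam_C = 2.0 ** np.arange(-10, 11)       # 2^[-10:10]
```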