Towards Understanding Generalization of Macro-AUC in Multi-label Learning

Authors: Guoqiang Wu, Chongxuan Li, Yilong Yin

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | As a theoretical work, the primary goal of experiments is to verify our theoretical findings rather than illustrate the superior performance of the proposed method. Therefore, we evaluate the aforementioned three learning algorithms in Section 3.2 in terms of Macro-AUC on 10 widely-used benchmark datasets with various domains and sizes of labels and data. The detailed statistics of the datasets are summarized in Table 2. (A Macro-AUC computation sketch appears below the table.)
Researcher Affiliation | Academia | ¹School of Software, Shandong University; ²Gaoling School of AI, Renmin University of China; Beijing Key Laboratory of Big Data Management and Analysis Methods, Beijing, China.
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/GuoqiangWoodrowWu/Macro-AUC-Theory.
Open Datasets | Yes | These datasets can be downloaded from http://mulan.sourceforge.net/datasets-mlc.html and http://palm.seu.edu.cn/zhangml/.
Dataset Splits | Yes | Moreover, we search the hyperparameter λ for all algorithms on all datasets in a wide range of {10^-6, 10^-5, ..., 10^2} using 3-fold cross-validation.
Hardware Specification | Yes | …means that A_pa takes more than one week by using a 16-core CPU server on the corresponding datasets.
Software Dependencies | No | The paper mentions using 'linear models with the base logistic loss' and an 'efficient stochastic optimization algorithm (i.e., SVRG-BB)', but does not provide specific version numbers for any software libraries, frameworks, or languages (e.g., Python, PyTorch, scikit-learn versions). (An SVRG-BB sketch appears below the table.)
Experiment Setup | Yes | Moreover, we search the hyperparameter λ for all algorithms on all datasets in a wide range of {10^-6, 10^-5, ..., 10^2} using 3-fold cross-validation. (A λ-selection sketch appears below the table.)
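
For readers reproducing the evaluation: Macro-AUC is the unweighted mean of the per-label AUC scores. Below is a minimal sketch, assuming scikit-learn is available; the function and variable names (`macro_auc`, `Y_true`, `scores`) are illustrative and not taken from the authors' repository.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def macro_auc(Y_true, scores):
    """Unweighted mean of per-label AUCs (Macro-AUC).

    Y_true : (n_samples, n_labels) binary ground-truth matrix.
    scores : (n_samples, n_labels) real-valued prediction scores.
    Labels where only one class is present are skipped, since
    AUC is undefined for them.
    """
    aucs = []
    for j in range(Y_true.shape[1]):
        y = Y_true[:, j]
        if y.min() == y.max():  # all-positive or all-negative label column
            continue
        aucs.append(roc_auc_score(y, scores[:, j]))
    return float(np.mean(aucs))
```

scikit-learn's `roc_auc_score(Y_true, scores, average='macro')` computes the same quantity in one call, but it raises an error when any label column contains a single class, which the loop above skips explicitly.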
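The hyperparameter search quoted in the Dataset Splits and Experiment Setup rows can be replicated along the following lines. This is a sketch under stated assumptions: `fit` and `evaluate` are caller-supplied stand-ins for training one of the paper's three algorithms and scoring it with Macro-AUC; neither name comes from the authors' code.

```python
import numpy as np
from sklearn.model_selection import KFold

def select_lambda(fit, evaluate, X, Y, lambdas=10.0 ** np.arange(-6, 3)):
    """Pick lambda from {1e-6, 1e-5, ..., 1e2} by 3-fold cross-validation.

    fit(X, Y, lam)        -> trained model (caller-supplied)
    evaluate(model, X, Y) -> Macro-AUC on held-out data (caller-supplied)
    """
    kf = KFold(n_splits=3, shuffle=True, random_state=0)
    mean_scores = []
    for lam in lambdas:
        fold_scores = []
        for tr, va in kf.split(X):
            model = fit(X[tr], Y[tr], lam)
            fold_scores.append(evaluate(model, X[va], Y[va]))
        mean_scores.append(np.mean(fold_scores))
    return lambdas[int(np.argmax(mean_scores))]
```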
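The optimizer named in the Software Dependencies row, SVRG-BB, is SVRG with a Barzilai-Borwein step size recomputed once per epoch (Tan et al., 2016). A minimal sketch follows; the interface (`grad_full`, `grad_i`, and the defaults) is an illustrative assumption rather than the authors' implementation.

```python
import numpy as np

def svrg_bb(grad_full, grad_i, w0, n, m=None, eta0=0.01, epochs=10, seed=None):
    """SVRG with Barzilai-Borwein step size (Tan et al., 2016) -- a sketch.

    grad_full(w)  -> full gradient of the objective at w
    grad_i(w, i)  -> gradient of the i-th component function at w
    n             -> number of component functions
    m             -> inner-loop length (defaults to n)
    eta0          -> step size for the first epoch (BB needs two iterates)
    """
    rng = np.random.default_rng(seed)
    m = m or n
    w_tilde = w0.copy()
    w_prev, mu_prev, eta = None, None, eta0
    for _ in range(epochs):
        mu = grad_full(w_tilde)  # full-gradient snapshot for this epoch
        if mu_prev is not None:
            # BB step size: ||dw||^2 / (m * dw.(dmu))
            dw, dmu = w_tilde - w_prev, mu - mu_prev
            eta = np.dot(dw, dw) / (m * np.dot(dw, dmu))
        w_prev, mu_prev = w_tilde.copy(), mu
        w = w_tilde.copy()
        for _ in range(m):
            i = rng.integers(n)
            # variance-reduced stochastic gradient step
            w -= eta * (grad_i(w, i) - grad_i(w_tilde, i) + mu)
        w_tilde = w
    return w_tilde
```

The appeal of the BB rule here is that only the first epoch's step size `eta0` needs hand-tuning; subsequent epochs set it from successive snapshot iterates and full gradients.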