Towards Decision-Friendly AUC: Learning Multi-Classifier with AUCµ
Authors: Peifeng Gao, Qianqian Xu, Peisong Wen, Huiyang Shao, Yuan He, Qingming Huang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on four benchmark datasets demonstrate the effectiveness of our proposed method in both AUCµ and F1-metric. |
| Researcher Affiliation | Collaboration | Peifeng Gao (1), Qianqian Xu (2,*), Peisong Wen (1,2), Huiyang Shao (1,2), Yuan He (3), Qingming Huang (1,2,4,5); 1: School of Computer Science and Tech., University of Chinese Academy of Sciences; 2: Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences; 3: Alibaba Group; 4: BDKM, University of Chinese Academy of Sciences; 5: Peng Cheng Laboratory. Emails: {gaopeifeng21, shaohuiyang21}@mails.ucas.ac.cn, {xuqianqian, wenpeisong20z}@ict.ac.cn, heyuan.hy@alibaba-inc.com, qmhuang@ucas.ac.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described in this paper. |
| Open Datasets | Yes | To demonstrate the effectiveness of our proposed framework, we conduct a series of experiments in four benchmark datasets for imbalanced multi-classification: CIFAR10, CIFAR100 (Krizhevsky 2012), Tiny ImageNet (Russakovsky et al. 2015) and ImageNet (Deng et al. 2009). |
| Dataset Splits | Yes | We keep the models with the highest AUCµ on the validation set and report the corresponding AUCµ and F1-metric on the test set. The training epochs are set to 25 for ImageNet and 80 for other datasets. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions using the Adam optimizer but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We utilize the Adam optimizer (Kingma and Ba 2017) for all methods. The initial learning rates are searched in [10⁻⁴, 10⁻³] and decay by 0.99 per epoch. We keep the models with the highest AUCµ on the validation set and report the corresponding AUCµ and F1-metric on the test set. The training epochs are set to 25 for ImageNet and 80 for other datasets. |
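Since the paper releases no code, the protocol in the "Experiment Setup" row can only be approximated. The PyTorch sketch below illustrates that protocol under stated assumptions: Adam, an exponential learning-rate decay of 0.99 per epoch, and checkpoint selection by the highest validation AUCµ. `train_one_epoch` and `auc_mu_score` are hypothetical helpers; every detail not quoted in the table is an assumption, not the authors' implementation.

```python
# Minimal training-loop sketch matching the quoted setup; NOT the authors' code.
import copy
import torch

def train(model, train_loader, val_loader, lr=1e-3, epochs=80):
    # "Adam optimizer ... initial learning rates searched in [1e-4, 1e-3]"
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    # "decay by 0.99 per epoch" maps naturally onto an exponential schedule.
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)

    best_auc, best_state = -1.0, None
    for _ in range(epochs):  # 25 epochs for ImageNet, 80 for the other datasets
        train_one_epoch(model, train_loader, optimizer)  # hypothetical helper
        scheduler.step()

        val_auc = auc_mu_score(model, val_loader)  # hypothetical AUCµ evaluator
        if val_auc > best_auc:  # keep the model with the highest validation AUCµ
            best_auc, best_state = val_auc, copy.deepcopy(model.state_dict())

    model.load_state_dict(best_state)  # report test AUCµ / F1 from this checkpoint
    return model
```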
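The `auc_mu_score` evaluator above could be backed by a metric like the following. This is a hedged sketch of AUCµ under the standard argmax partition matrix from Kleiman and Page (2019): each class pair (i, j) is scored by the probability difference p_i - p_j, a binary AUC is computed over the samples of those two classes, and the pairwise AUCs are averaged. Whether the paper evaluates exactly this variant is an assumption, and the function name `auc_mu` is hypothetical.

```python
# AUCµ sketch with the identity/argmax partition matrix; an assumption, not the
# paper's released implementation (none is provided).
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_mu(y_true: np.ndarray, y_prob: np.ndarray) -> float:
    """y_true: (n,) integer labels in {0..K-1}; y_prob: (n, K) class probabilities."""
    k = y_prob.shape[1]
    pair_aucs = []
    for i in range(k):
        for j in range(i + 1, k):
            mask = (y_true == i) | (y_true == j)
            if len(np.unique(y_true[mask])) < 2:
                continue  # skip a pair if either class is absent from the data
            scores = y_prob[mask, i] - y_prob[mask, j]  # orientation vector e_i - e_j
            labels = (y_true[mask] == i).astype(int)    # treat class i as positive
            pair_aucs.append(roc_auc_score(labels, scores))
    return float(np.mean(pair_aucs))  # average over all class pairs
```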