Asymptotically Unbiased Instance-wise Regularized Partial AUC Optimization: Theory and Algorithm

Authors: Huiyang Shao, Qianqian Xu, Zhiyong Yang, Shilong Bao, Qingming Huang

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, extensive experiments on several benchmark datasets demonstrate the effectiveness of our method. [...] We conduct extensive experiments on multiple imbalanced image classification tasks. The experimental results speak to the effectiveness of our proposed methods. [...] In Tab.2, Tab.3, we record the performance on test sets of all the methods on three subsets of CIFAR-10-LT, CIFAR-100-LT, and Tiny-ImageNet-200-LT. Each method is tuned independently for OPAUC and TPAUC metrics."
Researcher Affiliation | Academia | Huiyang Shao (1,2), Qianqian Xu (1), Zhiyong Yang (2), Shilong Bao (3,4), Qingming Huang (1,2,5,6). 1: Key Lab of Intell. Info. Process., Inst. of Comput. Tech., CAS; 2: School of Computer Science and Tech., University of Chinese Academy of Sciences; 3: State Key Lab of Info. Security, Inst. of Info. Engineering, CAS; 4: School of Cyber Security, University of Chinese Academy of Sciences; 5: BDKM, University of Chinese Academy of Sciences; 6: Peng Cheng Laboratory.
Pseudocode | Yes | Algorithm 1: Accelerated Stochastic Gradient Descent Ascent Algorithm. (A generic, non-accelerated descent-ascent sketch is given after this table for orientation.)
Open Source Code | Yes | The source code is available at https://github.com/Shaocr/PAUCI.
Open Datasets | Yes | "We adopt three imbalanced binary classification datasets: CIFAR-10-LT [8], CIFAR-100-LT [19] and Tiny-ImageNet-200-LT following the instructions in [39], where the binary datasets are constructed by selecting one super category as positive class and the other categories as negative class. Please see Appendix E for more details." (An illustrative dataset-construction sketch follows the table.)
Dataset Splits | No | The paper mentions 'test sets' and 'training convergence' but does not explicitly describe train/validation splits or split percentages in the main body. Such details may appear in Appendix E, but they are not stated in the text provided for analysis.
Hardware Specification | No | The paper does not give specific hardware details (e.g., GPU/CPU models or memory). Its ethics statement claims to report 'the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)', but that information does not appear in the main text provided for analysis.
Software Dependencies | No | The paper mentions 'mindspore, which is a new AI computing framework', but gives no version numbers for it or for any other software dependency.
Experiment Setup | No | The paper states 'Each method is tuned independently for OPAUC and TPAUC metrics.' and 'All algorithms use hyperparameters in the performance experiments.' Algorithm 1 lists the learning parameters {ν, λ, k, m, c1, c2, T}, but their specific values and other training configurations (e.g., batch size, epochs, optimizer settings) are not given in the main text; details are deferred to Appendix E. (A minimal partial-AUC evaluation sketch also follows the table.)
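
The Pseudocode row refers to Algorithm 1, an accelerated stochastic gradient descent-ascent (SGDA) method. For orientation, the following is a minimal, non-accelerated SGDA sketch for a generic min-max objective. The function names (`stochastic_gda`, `grad_w`, `grad_lam`, `sample_batch`), the step sizes, and the toy saddle problem are illustrative assumptions, not the paper's notation; Algorithm 1 additionally uses acceleration and the learning parameters {ν, λ, k, m, c1, c2, T}.

```python
import numpy as np

def stochastic_gda(grad_w, grad_lam, w, lam, sample_batch,
                   lr_w=1e-2, lr_lam=1e-2, T=5_000):
    """Plain (non-accelerated) stochastic gradient descent-ascent for a
    min-max objective min_w max_lam E[f(w, lam; batch)].  Sketch only:
    the paper's Algorithm 1 adds acceleration and uses its own learning
    parameters {nu, lambda, k, m, c1, c2, T}."""
    for _ in range(T):
        batch = sample_batch()             # draw a stochastic mini-batch (or noise sample)
        gw = grad_w(w, lam, batch)         # stochastic gradient w.r.t. the primal variable
        gl = grad_lam(w, lam, batch)       # stochastic gradient w.r.t. the dual variable
        w = w - lr_w * gw                  # descent step on w
        lam = lam + lr_lam * gl            # ascent step on lam
    return w, lam

# Toy usage on the strongly-convex-strongly-concave saddle problem
# f(w, lam) = 0.5*w**2 + w*lam - 0.5*lam**2, whose saddle point is (0, 0).
rng = np.random.default_rng(0)
w_star, lam_star = stochastic_gda(
    grad_w=lambda w, lam, b: w + lam + b,    # df/dw plus additive noise b
    grad_lam=lambda w, lam, b: w - lam + b,  # df/dlam plus additive noise b
    w=1.0, lam=-1.0,
    sample_batch=lambda: rng.normal(scale=0.01))
print(w_star, lam_star)                      # both approach 0
```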
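
The Open Datasets row quotes the binary construction used for the long-tailed benchmarks: one (super)category is taken as positive and the remaining categories as negative. Below is a minimal sketch of that style of construction on torchvision's CIFAR-10; the positive class index, imbalance ratio, and subsampling scheme are assumptions for illustration only, not the exact protocol of [39] or Appendix E, and the snippet assumes `torchvision` is installed.

```python
import numpy as np
from torchvision.datasets import CIFAR10

def binary_long_tailed_cifar10(root="./data", pos_class=0,
                               imbalance_ratio=0.02, seed=0):
    """Illustrative sketch: build an imbalanced binary task from CIFAR-10 by
    treating one category as positive and the remaining nine as negative,
    then subsampling the positives to imbalance_ratio * (#negatives).
    These choices are assumptions; the paper follows [39] and its Appendix E
    for the exact long-tailed construction."""
    ds = CIFAR10(root=root, train=True, download=True)
    images = ds.data                              # uint8 array of shape (50000, 32, 32, 3)
    labels = np.asarray(ds.targets)
    y = (labels == pos_class).astype(np.int64)    # 1 = positive class, 0 = negative class

    rng = np.random.default_rng(seed)
    pos_idx = np.flatnonzero(y == 1)
    neg_idx = np.flatnonzero(y == 0)
    n_pos = max(1, int(imbalance_ratio * len(neg_idx)))   # keep only a small positive subset
    keep = np.concatenate([rng.choice(pos_idx, size=n_pos, replace=False), neg_idx])
    rng.shuffle(keep)
    return images[keep], y[keep]
```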
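
The table repeatedly refers to the OPAUC and TPAUC metrics. OPAUC (one-way partial AUC) is the area under the ROC curve restricted to a false-positive-rate range [0, β]; TPAUC (two-way partial AUC) additionally restricts the true-positive-rate range. The sketch below computes a normalized OPAUC from an ROC curve; the cutoff β = 0.3, the β-normalization, and the helper name `one_way_partial_auc` are illustrative choices, not the paper's evaluation protocol, and TPAUC is not implemented.

```python
import numpy as np
from sklearn.metrics import roc_curve

def one_way_partial_auc(y_true, y_score, max_fpr=0.3):
    """Area under the ROC curve restricted to FPR <= max_fpr, divided by
    max_fpr so that a perfect ranker scores 1.0.  The cutoff and the
    normalization are illustrative assumptions; TPAUC, which additionally
    restricts the true-positive-rate range, is not implemented here."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    tpr_at_cut = np.interp(max_fpr, fpr, tpr)     # TPR at the FPR cutoff (linear interpolation)
    mask = fpr <= max_fpr
    fpr_part = np.concatenate([fpr[mask], [max_fpr]])
    tpr_part = np.concatenate([tpr[mask], [tpr_at_cut]])
    # Trapezoidal rule over the truncated ROC curve.
    area = float(np.sum(np.diff(fpr_part) * (tpr_part[:-1] + tpr_part[1:]) / 2.0))
    return area / max_fpr

# Example on synthetic scores: a random ranker's truncated ROC follows the
# diagonal TPR = FPR, so the normalized value is roughly max_fpr / 2 here.
rng = np.random.default_rng(0)
y = rng.binomial(1, 0.1, size=2000)
s = rng.normal(size=2000)
print(one_way_partial_auc(y, s))
```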