AUC Optimization from Multiple Unlabeled Datasets

Authors: Zheng Xie, Yu Liu, Ming Li

AAAI 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "In this section, we report the experimental results of the proposed Um-AUC, compared to state-of-the-art Um classification approaches." |
| Researcher Affiliation | Academia | "National Key Laboratory for Novel Software Technology, Nanjing University, China; School of Artificial Intelligence, Nanjing University, China" |
| Pseudocode | Yes | "Algorithm 1 Um-AUC" (see the surrogate-loss sketch after this table) |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a direct link to a code repository. |
| Open Datasets | Yes | "We tested the performance of Um-AUC using the benchmark datasets Kuzushiji-MNIST (K-MNIST for short) (Clanuwat et al. 2018), CIFAR-10, and CIFAR-100 (Krizhevsky, Hinton et al. 2009)" (see the loading sketch after this table) |
| Dataset Splits | No | The paper mentions a training set and a test set but gives no details about a validation set or specific training/validation splits. It states: "We train all models for 150 epochs, and report the AUC on the test set at the final epoch." |
| Hardware Specification | Yes | "Our implementation is based on PyTorch (Paszke et al. 2019), and experiments are conducted on an NVIDIA Tesla V100 GPU." |
| Software Dependencies | No | The paper mentions PyTorch (Paszke et al. 2019) but does not specify a version number for it or for any other software dependency needed for replication. |
| Experiment Setup | Yes | "We train all models for 150 epochs, and report the AUC on the test set at the final epoch. We used Adam (Kingma and Ba 2014) and cross-entropy loss for their optimization, following the standard implementation in the original paper." (see the training sketch after this table) |
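The Pseudocode row confirms that Algorithm 1 (Um-AUC) is printed in the paper, but this report does not reproduce it. As background only, here is a minimal sketch of the standard pairwise square-loss AUC surrogate that AUC optimization methods of this kind build on; it is not the paper's Algorithm 1, and the function name and tensor shapes are illustrative.

```python
import torch

def pairwise_auc_surrogate(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    """Mean square-loss surrogate for 1 - AUC over all positive/negative pairs.

    pos_scores: model scores for (pseudo-)positive examples, shape (n_pos,)
    neg_scores: model scores for (pseudo-)negative examples, shape (n_neg,)
    """
    # AUC counts pairs where a positive outscores a negative; the square loss
    # (1 - (s_pos - s_neg))^2 is a differentiable per-pair relaxation.
    diff = pos_scores.unsqueeze(1) - neg_scores.unsqueeze(0)  # shape (n_pos, n_neg)
    return ((1.0 - diff) ** 2).mean()
```

In the Um setting the positive/negative roles are not given directly; the paper derives them from the unlabeled sets, which this generic surrogate does not capture.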
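All three benchmarks quoted in the Open Datasets row ship with torchvision, so a replication can fetch them directly. The root directory and the bare ToTensor transform below are assumptions, not settings taken from the paper.

```python
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

# Download each benchmark named in the paper into a local "data" directory.
kmnist_train = datasets.KMNIST("data", train=True, download=True, transform=to_tensor)
kmnist_test = datasets.KMNIST("data", train=False, download=True, transform=to_tensor)
cifar10_train = datasets.CIFAR10("data", train=True, download=True, transform=to_tensor)
cifar100_train = datasets.CIFAR100("data", train=True, download=True, transform=to_tensor)
```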
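For the Experiment Setup and Hardware Specification rows, a minimal sketch of the quoted protocol (Adam, 150 epochs, AUC on the test set at the final epoch, GPU if available) could look like the following. The network, learning rate, batch sizes, and binary target construction are placeholders, since the paper's quoted text does not pin them down.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from sklearn.metrics import roc_auc_score

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # paper: NVIDIA Tesla V100

to_tensor = transforms.ToTensor()
train_set = datasets.KMNIST("data", train=True, download=True, transform=to_tensor)
test_set = datasets.KMNIST("data", train=False, download=True, transform=to_tensor)
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)  # batch size is an assumption
test_loader = DataLoader(test_set, batch_size=256)

# Placeholder scorer; the paper's actual network is not specified in this table.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, 1)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam per the paper; lr assumed
criterion = nn.BCEWithLogitsLoss()  # stand-in objective, not the Um-AUC loss

for epoch in range(150):  # "train all models for 150 epochs"
    model.train()
    for x, y in train_loader:
        # Illustrative binary target (even vs. odd class id); the real
        # positive/negative structure comes from the Um construction.
        target = (y % 2).float().unsqueeze(1).to(device)
        optimizer.zero_grad()
        loss = criterion(model(x.to(device)), target)
        loss.backward()
        optimizer.step()

# "report the AUC on the test set at the final epoch"
model.eval()
scores, labels = [], []
with torch.no_grad():
    for x, y in test_loader:
        scores.append(model(x.to(device)).squeeze(1).cpu())
        labels.append((y % 2).float())
print("final-epoch test AUC:", roc_auc_score(torch.cat(labels).numpy(), torch.cat(scores).numpy()))
```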