OpenAUC: Towards AUC-Oriented Open-Set Recognition

Authors: Zitai Wang, Qianqian Xu, Zhiyong Yang, Yuan He, Xiaochun Cao, Qingming Huang

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, an end-to-end learning method is proposed to minimize the OpenAUC risk, and the experimental results on popular benchmark datasets speak to its effectiveness. (A hedged sketch of the metric follows the table.)
Researcher Affiliation | Collaboration | Zitai Wang (1,2), Qianqian Xu (3), Zhiyong Yang (4), Yuan He (5), Xiaochun Cao (6,1), Qingming Huang (4,3,7,8). 1: SKLOIS, Institute of Information Engineering, CAS; 2: School of Cyber Security, University of Chinese Academy of Sciences; 3: Key Lab. of Intelligent Information Processing, Institute of Computing Tech., CAS; 4: School of Computer Science and Tech., University of Chinese Academy of Sciences; 5: Alibaba Group; 6: School of Cyber Science and Tech., Shenzhen Campus, Sun Yat-sen University; 7: BDKM, University of Chinese Academy of Sciences; 8: Peng Cheng Laboratory
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. Methodological steps are described in prose and mathematical formulations.
Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]
Open Datasets | Yes | Following the protocol in [13] and [14], the experiments are conducted on the following datasets: (1) MNIST [32], SVHN [33] and CIFAR10 [34]... (2) CIFAR+10 and CIFAR+50... (3) Tiny ImageNet [35]... (4) Fine-grained datasets such as CUB [36]... Dataset sources: MNIST: http://yann.lecun.com/exdb/mnist/ (licensed GPL3); SVHN: http://ufldl.stanford.edu/housenumbers/ (licensed GPL3); CIFAR10: https://www.cs.toronto.edu/~kriz/cifar.html (licensed MIT); Tiny ImageNet: http://cs231n.stanford.edu/tiny-imagenet-200.zip (licensed MIT); CUB: https://www.vision.caltech.edu/datasets/cub_200_2011/ (licensed MIT).
Dataset Splits | Yes | The experiments are conducted on five different splits of each dataset, and we report the standard deviation in Tab. 2. (A sketch of such a split protocol follows the table.)
Hardware Specification | No | The main paper does not explicitly state specific GPU or CPU models, memory sizes, or other hardware details. While the 'Ethics Statement' claims these details are provided, they are not present in the provided paper content.
Software Dependencies | No | The paper mentions software such as PyTorch, NumPy, and Scikit-learn in its references, but the provided text does not specify exact version numbers of these or other ancillary software components used for the experiments.
Experiment Setup | Yes | Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes]
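
Below is a minimal sketch of how an OpenAUC-style metric could be computed, assuming the standard pairwise formulation the paper builds on: a (known, unknown) test pair counts as correct only if the known sample is correctly classified by the closed-set head and receives a lower unknown-ness score than the unknown sample. The function name, score convention (higher score = more likely unknown), and toy inputs are assumptions for illustration, not the authors' released implementation.

# Hedged sketch of an OpenAUC-style pairwise metric (assumed formulation).
import numpy as np

def open_auc(known_scores, known_correct, unknown_scores):
    """Fraction of (known, unknown) pairs where the known sample is both
    correctly classified and ranked as less 'unknown' than the unknown one."""
    known_scores = np.asarray(known_scores, dtype=float)
    known_correct = np.asarray(known_correct, dtype=bool)
    unknown_scores = np.asarray(unknown_scores, dtype=float)

    # Pairwise comparison matrix of shape (n_known, n_unknown).
    ranked_ahead = known_scores[:, None] < unknown_scores[None, :]
    # A misclassified known sample contributes 0 to every pair it appears in.
    correct_pairs = ranked_ahead & known_correct[:, None]
    return correct_pairs.mean()

# Toy usage with made-up scores.
print(open_auc(known_scores=[0.1, 0.4, 0.8],
               known_correct=[True, True, False],
               unknown_scores=[0.5, 0.9]))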
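
For the "five different splits" row, the following sketch shows one way such an open-set protocol is commonly set up: a random subset of classes is treated as known for training, the rest as unknown at test time, and results are aggregated as mean plus/minus standard deviation over the splits. The 6-known / 4-unknown division for CIFAR10, the seeds, and the evaluate placeholder are assumptions, not values taken from the paper.

# Hedged sketch: five known/unknown class splits and mean +/- std reporting.
import random
import statistics

NUM_CLASSES = 10   # e.g. CIFAR10
NUM_KNOWN = 6      # remaining classes serve as unknowns at test time (assumed)

def make_split(seed):
    rng = random.Random(seed)
    classes = list(range(NUM_CLASSES))
    rng.shuffle(classes)
    return sorted(classes[:NUM_KNOWN]), sorted(classes[NUM_KNOWN:])

splits = [make_split(seed) for seed in range(5)]  # five different splits

def evaluate(known, unknown):
    # Placeholder: train on the known classes of this split and
    # compute OpenAUC on the corresponding test set.
    return 0.0

scores = [evaluate(k, u) for k, u in splits]
print(f"OpenAUC: {statistics.mean(scores):.4f} +/- {statistics.pstdev(scores):.4f}")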