Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability

Authors: Jianing Zhu, Hengzhuang Li, Jiangchao Yao, Tongliang Liu, Jianliang Xu, Bo Han

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments and analysis demonstrate the effectiveness of our method.
Researcher Affiliation | Academia | (1) Department of Computer Science, Hong Kong Baptist University; (2) CMIC, Shanghai Jiao Tong University; (3) Shanghai AI Laboratory; (4) Mohamed bin Zayed University of Artificial Intelligence; (5) Sydney AI Centre, The University of Sydney.
Pseudocode | Yes | We present the algorithms of UM (in Algorithm 1) and UMAP (in Algorithm 2) in Appendix F.
Open Source Code | Yes | The code is available at: https://github.com/tmlr-group/Unleashing-Mask.
Open Datasets | Yes | CIFAR-10 and CIFAR-100 (Krizhevsky, 2009) as our major ID datasets, and we also adopt ImageNet (Deng et al., 2009) for performance exploration.
Dataset Splits | Yes | To choose the parameters of the estimated loss constraint, we use the Tiny ImageNet (Tavanaei, 2020) dataset as the validation set.
Hardware Specification | Yes | All experiments are conducted with multiple runs on NVIDIA Tesla V100-SXM2-32GB GPUs with Python 3.6 and PyTorch 1.8.
Software Dependencies | Yes | All experiments are conducted with multiple runs on NVIDIA Tesla V100-SXM2-32GB GPUs with Python 3.6 and PyTorch 1.8.
Experiment Setup | Yes | We conduct all major experiments on DenseNet-101 (Huang et al., 2017) with training epochs fixed to 100. The models are trained using stochastic gradient descent (Kiefer & Wolfowitz, 1952) with Nesterov momentum (Duchi et al., 2011). We adopt Cosine Annealing (Loshchilov & Hutter, 2017) to schedule the learning rate, which begins at 0.1. We set the momentum and weight decay to 0.9 and 10^-4, respectively, throughout all experiments. The mini-batch size is 256 for both ID samples (during training and testing) and OOD samples (during testing).
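
For reference, the training hyperparameters quoted above map directly onto standard PyTorch components. The sketch below is a minimal, hypothetical reproduction of that configuration, not the authors' code: the tiny linear model and the random mini-batch are placeholder stand-ins (the actual DenseNet-101 and CIFAR data loaders come from the authors' repository, as torchvision does not ship a DenseNet-101 variant).

```python
import torch
import torch.nn as nn
from torch.optim import SGD
from torch.optim.lr_scheduler import CosineAnnealingLR

# Placeholder model: any nn.Module suffices to illustrate the
# optimizer/scheduler setup; the paper trains DenseNet-101.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

EPOCHS = 100  # training epochs fixed to 100 in the paper

# SGD with Nesterov momentum: lr starts at 0.1, momentum 0.9,
# weight decay 10^-4, matching the reported settings.
optimizer = SGD(model.parameters(), lr=0.1, momentum=0.9,
                nesterov=True, weight_decay=1e-4)

# Cosine annealing over the full run, beginning at lr = 0.1.
scheduler = CosineAnnealingLR(optimizer, T_max=EPOCHS)

criterion = nn.CrossEntropyLoss()
for epoch in range(EPOCHS):
    # One random mini-batch of size 256 stands in for a CIFAR loader.
    x = torch.randn(256, 3, 32, 32)
    y = torch.randint(0, 10, (256,))
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()
    scheduler.step()  # advance the cosine schedule once per epoch
```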