Density-driven Regularization for Out-of-distribution Detection
Authors: Wenjian Huang, Hao Wang, Jiahao Xia, Chengyan Wang, Jianguo Zhang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To the best of our knowledge, we have conducted the most extensive evaluations and comparisons on computer vision benchmarks. The results show that our method significantly outperforms state-of-the-art detectors, and even achieves comparable or better performance than methods utilizing additional large-scale outlier exposure datasets. |
| Researcher Affiliation | Academia | Wenjian Huang (1), Hao Wang (1), Jiahao Xia (2), Chengyan Wang (3), Jianguo Zhang (4,5); (1) Dept. of Computer Science and Engineering, Southern University of Science and Technology; (2) Faculty of Engineering and IT, University of Technology Sydney; (3) Human Phenome Institute, Fudan University; (4) Research Institute of Trustworthy Autonomous Systems and Dept. of Computer Science and Engineering, Southern University of Science and Technology; (5) Peng Cheng Lab, Shenzhen, China |
| Pseudocode | Yes | Figure 1 shows the overview of the proposed density-driven regularization (DDR) for OOD detection and Algorithm 1 presents the pseudo-code for the training of our DDR method. |
| Open Source Code | Yes | Code for the proposed DDR is available at http://WenjianHuang93.github.io/files/OOD_DDR.zip |
| Open Datasets | Yes | Following existing literature [1, 3, 19, 32, 4, 8, 10, 9, 17, 16], we train OOD models on the training set of CIFAR-10 [37], CIFAR-100 [37] and ImageNet [38], respectively, and evaluate OOD detectors in identifying OOD test samples from other datasets, including iSUN [39], iNaturalist [40], LSUN(R) [17], LSUN(C) [17], Gaussian [17], Places365 [41], Textures [42] and SVHN [43]. (A data-loading sketch for these benchmarks follows the table.) |
| Dataset Splits | No | The paper specifies training and test sets but does not explicitly mention or quantify a separate validation split or dataset used for hyperparameter tuning or early stopping during training. |
| Hardware Specification | Yes | All experiments run on the PyTorch framework [50] with Nvidia A100 and GeForce RTX 3090 GPUs, with a maximum RAM of 512GB. |
| Software Dependencies | No | The paper mentions 'PyTorch framework [50]' but does not provide a specific version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | When training our OOD detector, we use the same normalization, augmentation and training setting as in [10, 4], where the learning rate was adjusted by cosine decay schedule [48] and the fine-tuning epoch is 10. The initial learning rates are set to 10^-3 and 10^-4 for CIFAR and ImageNet experiments. Other hyperparameters are set as follows: batch size of 256, regularization weight γ = 10^-2, constant r = 10, and fixed statistical significance level α = 0.05. (A training-configuration sketch follows the table.) |
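The following is a minimal sketch, not the authors' released code, of loading the in-distribution training set and one of the paper's OOD test sets with torchvision. The dataset roots and the CIFAR-10 normalization statistics are assumptions (the paper reuses the normalization of [10, 4], which may differ).

```python
# Minimal sketch (assumed setup, not from the paper's release): in-distribution
# CIFAR-10 training data plus one OOD test set (SVHN). Other OOD sets from the
# paper (iSUN, LSUN, Places365, Textures, ...) are loaded analogously.
import torch
from torchvision import datasets, transforms

# Commonly used CIFAR-10 channel statistics; the paper's exact normalization
# follows [10, 4] and may differ.
normalize = transforms.Normalize(mean=(0.4914, 0.4822, 0.4465),
                                 std=(0.2470, 0.2435, 0.2616))
transform = transforms.Compose([transforms.ToTensor(), normalize])

id_train = datasets.CIFAR10(root="./data", train=True, download=True,
                            transform=transform)
ood_test = datasets.SVHN(root="./data", split="test", download=True,
                         transform=transform)

# Batch size 256 as reported in the paper's experiment setup.
id_loader = torch.utils.data.DataLoader(id_train, batch_size=256, shuffle=True)
ood_loader = torch.utils.data.DataLoader(ood_test, batch_size=256)
```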
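And here is a minimal sketch of how the reported fine-tuning configuration could be wired into a standard PyTorch loop. The backbone (`resnet18`), the SGD momentum, and the `ddr_regularizer` placeholder are all assumptions for illustration: the actual DDR penalty is defined by Algorithm 1 in the paper, and `id_loader` comes from the data-loading sketch above. Only the hyperparameter values (10 epochs, initial learning rate, γ, r, α, cosine decay) are taken from the paper.

```python
# Minimal sketch (assumptions labeled below), not the authors' training code.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

EPOCHS = 10    # fine-tuning epochs reported in the paper
LR = 1e-3      # initial learning rate (1e-3 for CIFAR, 1e-4 for ImageNet)
GAMMA = 1e-2   # regularization weight gamma
R = 10         # constant r
ALPHA = 0.05   # fixed statistical significance level alpha

def ddr_regularizer(logits, r=R, alpha=ALPHA):
    """Hypothetical placeholder for the density-driven penalty of Algorithm 1;
    returns a zero scalar here and must be replaced with the paper's term."""
    return logits.new_zeros(())

model = resnet18(num_classes=10)  # backbone choice is illustrative
optimizer = torch.optim.SGD(model.parameters(), lr=LR, momentum=0.9)
# Cosine decay schedule over the fine-tuning run, as cited from [48].
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=EPOCHS)

for epoch in range(EPOCHS):
    for images, labels in id_loader:
        logits = model(images)
        # Cross-entropy plus the density-driven term, weighted by gamma.
        loss = F.cross_entropy(logits, labels) + GAMMA * ddr_regularizer(logits)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()  # one cosine-decay step per epoch
```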