Out-of-Distribution Detection with Negative Prompts
Authors: Jun Nie, Yonggang Zhang, Zhen Fang, Tongliang Liu, Bo Han, Xinmei Tian
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are conducted on diverse OOD detection benchmarks, showing the effectiveness of our proposed method. |
| Researcher Affiliation | Collaboration | Jun Nie (1), Yonggang Zhang (2), Zhen Fang (3), Tongliang Liu (4), Bo Han (2), Xinmei Tian (1,5); (1) University of Science and Technology of China, (2) TMLR Group, Hong Kong Baptist University, (3) University of Technology Sydney, (4) Sydney AI Centre, The University of Sydney, (5) Institute of Artificial Intelligence, Hefei Comprehensive National Science Center |
| Pseudocode | Yes | Our method is fully documented in Section 3.2 with the pseudo algorithm detailed in Algorithm 1. |
| Open Source Code | Yes | Code is available at https://github.com/junz-debug/lsn. |
| Open Datasets | Yes | We use publicly available datasets, which are described in detail in Section 4.2 and Appendix C. |
| Dataset Splits | Yes | For CIFAR10, 6 classes are selected as ID classes, and the 4 remaining classes are selected as OOD classes. Experiments are conducted on 5 different splits. For CIFAR+10, 4 non-animal classes in CIFAR10 are selected as ID classes, and 10 animal classes in CIFAR100 are selected as OOD classes. |
| Hardware Specification | Yes | We use Python 3.8.16 and PyTorch 1.12.1 and several NVIDIA GeForce RTX 2080, 3090, and 4090 GPUs. |
| Software Dependencies | Yes | We use Python 3.8.16 and PyTorch 1.12.1 and several NVIDIA GeForce RTX 2080, 3090, and 4090 GPUs. |
| Experiment Setup | Yes | The initial learning rate is set to 1e-3. For each class, we learn three sets of negative prompts, and λ is set to 0.1. |
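The CIFAR10 open-set split quoted in the "Dataset Splits" row can be set up as follows. This is a minimal sketch assuming the standard torchvision CIFAR10 loader; the six ID class indices shown are illustrative only, since the row does not state which classes make up each of the five splits.

```python
# Minimal sketch of the CIFAR10 6/4 ID/OOD split described in the
# "Dataset Splits" row. The concrete class indices per split are not given
# in the row, so the id_classes choice below is a hypothetical example.
import torch
from torchvision import datasets, transforms

def build_cifar10_split(root, id_classes):
    """Partition the CIFAR10 test set into ID (kept) and OOD (held-out) subsets."""
    tfm = transforms.ToTensor()
    test_set = datasets.CIFAR10(root=root, train=False, download=True, transform=tfm)
    targets = torch.tensor(test_set.targets)
    id_mask = torch.isin(targets, torch.tensor(id_classes))
    id_idx = torch.nonzero(id_mask, as_tuple=True)[0].tolist()
    ood_idx = torch.nonzero(~id_mask, as_tuple=True)[0].tolist()
    id_subset = torch.utils.data.Subset(test_set, id_idx)
    ood_subset = torch.utils.data.Subset(test_set, ood_idx)
    return id_subset, ood_subset

# Example: one of the five 6-class ID splits (indices are illustrative only).
id_subset, ood_subset = build_cifar10_split("./data", id_classes=[0, 1, 2, 4, 5, 9])
```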
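The "Experiment Setup" row reports an initial learning rate of 1e-3, three sets of negative prompts per class, and λ = 0.1. A minimal sketch of such a configuration follows, assuming CoOp-style learnable context vectors; the prompt shapes, the optimizer choice, and the exact way λ weights the negative-prompt loss are assumptions, not the authors' released code.

```python
# Minimal sketch of the reported training configuration: learning rate 1e-3,
# three sets of learnable negative prompts per class, and a loss weight
# lambda = 0.1. The prompt shapes, the CLIP text-encoder wiring, and the exact
# form of the negative-prompt loss are assumptions, not the authors' code.
import torch
import torch.nn as nn

num_classes, num_neg, ctx_len, ctx_dim = 100, 3, 16, 512  # illustrative sizes

# Learnable context vectors for positive and negative prompts (CoOp-style).
pos_ctx = nn.Parameter(torch.randn(num_classes, ctx_len, ctx_dim) * 0.02)
neg_ctx = nn.Parameter(torch.randn(num_classes, num_neg, ctx_len, ctx_dim) * 0.02)

optimizer = torch.optim.SGD([pos_ctx, neg_ctx], lr=1e-3)  # initial lr from the row
lam = 0.1  # weight on the negative-prompt term

def total_loss(pos_loss, neg_loss):
    # Hypothetical combination: positive-prompt objective plus weighted negative term.
    return pos_loss + lam * neg_loss
```

For the actual loss and prompt construction, consult the released code at https://github.com/junz-debug/lsn; the sketch only mirrors the hyperparameters quoted in the table.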