What If the Input is Expanded in OOD Detection?
Authors: Boxuan Zhang, Jianing Zhu, Zengmao Wang, Tongliang Liu, Bo Du, Bo Han
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments and analyses have been conducted to understand and verify the effectiveness of CoVer. |
| Researcher Affiliation | Academia | 1School of Computer Science, Wuhan University 2TMLR Group, Department of Computer Science, Hong Kong Baptist University 3Sydney AI Center, The University of Sydney 4RIKEN Center for Advanced Intelligence Project |
| Pseudocode | No | The paper does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is publicly available at: https://github.com/tmlr-group/CoVer. |
| Open Datasets | Yes | Following previous work [1, 31], we adopt the ImageNet-1K OOD benchmark [24], which uses ImageNet-1K [14] as ID data and iNaturalist [49], SUN [55], Places [60], and Textures [7] as OOD data. |
| Dataset Splits | Yes | To select the most effective corruption types for each method, we use SVHN [37] as the validation set. |
| Hardware Specification | Yes | All experiments are conducted on NVIDIA GeForce RTX 3090 GPUs with Python 3.10 and PyTorch 2.2. |
| Software Dependencies | Yes | All experiments are conducted on NVIDIA GeForce RTX 3090 GPUs with Python 3.10 and PyTorch 2.2. |
| Experiment Setup | Yes | By default, we use the CoVer score in the max-softmax form and set τ = 1 as the temperature. |
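
The quoted default setup (CoVer score in max-softmax form with temperature τ = 1) can be illustrated with a minimal PyTorch sketch, assuming CoVer averages the max-softmax confidence over the original input and a set of corrupted views, i.e. the "expanded" inputs the paper refers to. The `model` and `corruptions` arguments below are illustrative placeholders, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def cover_msp_score(model, x, corruptions, tau=1.0):
    """Sketch of a max-softmax confidence averaged over expanded inputs.

    model:       classifier returning logits of shape (batch, num_classes)
    x:           batch of input images, shape (batch, C, H, W)
    corruptions: list of callables, each mapping x to a corrupted view
    tau:         softmax temperature (tau = 1 matches the quoted default)
    """
    views = [x] + [corrupt(x) for corrupt in corruptions]
    per_view_scores = []
    for v in views:
        probs = F.softmax(model(v) / tau, dim=-1)
        per_view_scores.append(probs.max(dim=-1).values)  # max-softmax per sample
    # Average confidence across the original and corrupted views;
    # higher scores indicate the sample is more likely in-distribution.
    return torch.stack(per_view_scores, dim=0).mean(dim=0)
```

In this sketch, thresholding the returned score would separate ID from OOD samples; the specific corruption types per method would be the ones selected on the SVHN validation set, as quoted in the table.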