Out-of-distribution Detection with Implicit Outlier Transformation
Authors: Qizhou Wang, Junjie Ye, Feng Liu, Quanyu Dai, Marcus Kalander, Tongliang Liu, Jianye Hao, Bo Han
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments under various OOD detection setups, demonstrating the effectiveness of our method against its advanced counterparts. |
| Researcher Affiliation | Collaboration | 1 Hong Kong Baptist University, 2 Huawei Noah's Ark Lab, 3 The University of Melbourne, 4 Sydney AI Centre, The University of Sydney |
| Pseudocode | Yes | The overall DOE algorithm is summarized in Appendix B. |
| Open Source Code | Yes | The code is publicly available at: github.com/qizhouwang/doe. |
| Open Datasets | Yes | For the CIFAR benchmarks, we adopt the Tiny ImageNet dataset (Le & Yang, 2015) as the surrogate OOD dataset for training. For the ImageNet dataset, we employ the ImageNet-21K-P dataset (Ridnik et al., 2021)... the experiments are all conducted using public datasets. |
| Dataset Splits | Yes | Hyper-parameters are chosen based on the OOD detection performance on validation datasets, which are separated from ID and surrogate OOD data. |
| Hardware Specification | No | The paper mentions network architectures like 'WRN-40-2' and 'ResNet-50' but does not specify any particular GPU models, CPU types, or other hardware used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies, libraries, or frameworks used (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | For the CIFAR benchmarks, we employ the WRN-40-2... trained for 200 epochs via empirical risk minimization, with a batch size 64, momentum 0.9, and initial learning rate 0.1... For the ImageNet... ResNet-50... For the CIFAR benchmarks, DOE is run for 10 epochs with an initial learning rate of 0.01 and the cosine decay... The batch size is 128 for ID cases and 256 for OOD cases. The number of warm-up epochs is set to 5. λ is 1 and β is 0.6. For the ImageNet dataset, DOE is run for 4 epochs with an initial learning rate of 0.0001 and cosine decay. The batch sizes are 64 for both ID and surrogate OOD cases. The number of warm-up epochs is 2. λ is 1 and β is 0.1. ...σ is uniformly sampled from {1e-1, 1e-2, 1e-3, 1e-4} in each training step. |
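The fine-tuning schedule reported above (10 epochs, initial learning rate 0.01 with cosine decay, 5 warm-up epochs, λ = 1, β = 0.6, and σ drawn uniformly from {1e-1, 1e-2, 1e-3, 1e-4} per step for the CIFAR benchmarks) can be sketched as a small configuration helper. This is a minimal illustration, not the authors' code: the function names and the standard cosine-annealing formula are assumptions; consult the released repository for the actual implementation.

```python
import math
import random

# Hyper-parameters reported for the CIFAR benchmarks (DOE fine-tuning stage).
EPOCHS = 10
INIT_LR = 0.01
WARMUP_EPOCHS = 5                   # warm-up epochs before the full objective
LAM = 1.0                           # λ: weight on the regularization term
BETA = 0.6                          # β: trade-off hyper-parameter
SIGMAS = [1e-1, 1e-2, 1e-3, 1e-4]   # candidate values for σ

def cosine_lr(epoch: int, total_epochs: int = EPOCHS,
              init_lr: float = INIT_LR) -> float:
    """Standard cosine decay from init_lr toward 0 over total_epochs
    (an assumed form; the paper only states 'cosine decay')."""
    return 0.5 * init_lr * (1.0 + math.cos(math.pi * epoch / total_epochs))

def sample_sigma(rng: random.Random = random) -> float:
    """σ is re-drawn uniformly from the candidate set at every training step."""
    return rng.choice(SIGMAS)

# Per-epoch learning rates for the 10-epoch CIFAR schedule.
schedule = [round(cosine_lr(e), 6) for e in range(EPOCHS)]
```

For the ImageNet setup, only the constants change per the table (4 epochs, initial learning rate 0.0001, 2 warm-up epochs, β = 0.1).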