Fast Decision Boundary based Out-of-Distribution Detector

Authors: Litian Liu, Yao Qin

ICML 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Our method matches or surpasses the effectiveness of state-of-the-art methods in extensive experiments while incurring negligible overhead in inference latency. Overall, our approach significantly improves the efficiency-effectiveness trade-off in OOD detection. ... Experimental analysis: In Section 4, we demonstrate across extensive experiments that fDBD achieves or surpasses the state-of-the-art OOD detection effectiveness with negligible latency overhead. |
| Researcher Affiliation | Academia | ¹MIT ²UC Santa Barbara. Correspondence to: Litian Liu <litianl@mit.edu>, Yao Qin <yaoqin@ucsb.edu>. |
| Pseudocode | No | The paper describes the proposed method and provides theoretical proofs but does not include any explicitly labeled pseudocode or algorithm blocks. (An illustrative sketch of a boundary-distance score is given after this table.) |
| Open Source Code | Yes | Code is available at: https://github.com/litianliu/fDBD-OOD. |
| Open Datasets | Yes | On the CIFAR-10 OOD benchmark, we use the standard CIFAR-10 test set with 10,000 images as ID test samples. For OOD samples, we consider common OOD benchmarks: SVHN (Netzer et al., 2011), iSUN (Xu et al., 2015), Places365 (Zhou et al., 2017), and Texture (Cimpoi et al., 2014). |
| Dataset Splits | Yes | Datasets: On the CIFAR-10 OOD benchmark, we use the standard CIFAR-10 test set with 10,000 images as ID test samples. ... We use 50,000 ImageNet validation images in the standard split as ID test samples. |
| Hardware Specification | Yes | In particular, on a Tesla T4 GPU, the average inference time on the CIFAR-10 classifier is 0.53ms per image with or without computing the distance using our method. ... On a Tesla T4 GPU, estimating the distance using CW attack takes 992.2ms per image per class. (A timing sketch follows the table.) |
| Software Dependencies | No | The paper mentions using 'Pytorch' and refers to a 'training recipe' link for ResNet-50 models, but it does not explicitly state specific version numbers for PyTorch or any other software libraries used. |
| Experiment Setup | Yes | ResNet-18 w/ Cross-Entropy Loss: the classifier is trained for 100 epochs, with a starting learning rate of 0.1 decaying to 0.01, 0.001, and 0.0001 at epochs 50, 75, and 90, respectively. ... ResNet-18 w/ Contrastive Loss: the model is trained for 500 epochs with batch size 1024. The temperature is set to 0.1. A cosine learning rate schedule (Loshchilov & Hutter, 2016) starting at 0.5 is used. (Both schedules are sketched in code after this table.) |
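
Since the paper contains no explicit pseudocode block, below is a minimal sketch of a decision-boundary-distance score in the spirit of fDBD: closed-form distances from the penultimate feature to the pairwise decision boundaries of the final linear layer, averaged over non-predicted classes and regularized by the feature's deviation from the ID training mean. The function and variable names (`fdbd_score`, `train_mean`) are illustrative assumptions, not the authors' implementation; see their repository linked above for the reference code.

```python
import torch

def fdbd_score(feature: torch.Tensor, W: torch.Tensor, b: torch.Tensor,
               train_mean: torch.Tensor) -> torch.Tensor:
    """Sketch of a decision-boundary-distance OOD score.

    feature:    (d,) penultimate-layer feature of one test sample
    W, b:       (C, d) weights and (C,) biases of the final linear layer
    train_mean: (d,) mean of ID training features
    """
    logits = W @ feature + b
    pred = logits.argmax()
    # Closed-form distance from `feature` to the boundary between the
    # predicted class and each class k: |logit_pred - logit_k| / ||w_pred - w_k||
    dists = (logits[pred] - logits).abs() / \
            (W[pred] - W).norm(dim=1).clamp_min(1e-12)
    mask = torch.ones_like(logits, dtype=torch.bool)
    mask[pred] = False  # exclude the degenerate pred-vs-pred "boundary"
    # Average boundary distance, regularized by deviation from the train mean;
    # under this score, ID samples tend to score higher than OOD samples.
    return dists[mask].mean() / (feature - train_mean).norm()
```

At test time, such a score would be thresholded on held-out ID data to hit a target true-positive rate, as is standard practice in OOD detection.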
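The per-image latencies in the Hardware Specification row are averages over repeated forward passes. A common way to reproduce such a measurement in PyTorch is sketched below; it assumes a CUDA device is available, and the model and input resolution are placeholders (any CIFAR-10-sized classifier would do). The explicit `torch.cuda.synchronize()` calls are needed because CUDA kernels launch asynchronously.

```python
import time
import torch
import torchvision

# Placeholder model and input; swap in the actual classifier under test.
model = torchvision.models.resnet18(num_classes=10).cuda().eval()
x = torch.randn(1, 3, 32, 32, device="cuda")

with torch.no_grad():
    for _ in range(50):              # warm-up: exclude CUDA init and caching
        model(x)
    torch.cuda.synchronize()
    start = time.perf_counter()
    n = 1000
    for _ in range(n):
        model(x)
    torch.cuda.synchronize()         # wait for all queued kernels to finish
    elapsed = time.perf_counter() - start

print(f"average inference time: {elapsed / n * 1e3:.3f} ms per image")
```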
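Both training recipes in the Experiment Setup row map directly onto built-in PyTorch schedulers: the cross-entropy recipe is a `MultiStepLR` step decay, and the contrastive recipe is a `CosineAnnealingLR` schedule. A sketch assuming SGD (the quoted setup does not name the optimizer, so that choice is an assumption):

```python
import torch

params = [torch.nn.Parameter(torch.zeros(1))]  # stand-in for model.parameters()

# Cross-entropy recipe: LR 0.1, decayed 10x at epochs 50, 75, and 90.
opt_ce = torch.optim.SGD(params, lr=0.1)
sched_ce = torch.optim.lr_scheduler.MultiStepLR(
    opt_ce, milestones=[50, 75, 90], gamma=0.1)

# Contrastive recipe: cosine annealing from 0.5 over 500 epochs.
opt_con = torch.optim.SGD(params, lr=0.5)
sched_con = torch.optim.lr_scheduler.CosineAnnealingLR(opt_con, T_max=500)

for epoch in range(100):
    opt_ce.step()    # stand-in for one epoch of training
    sched_ce.step()  # LR becomes 0.01 at epoch 50, 0.001 at 75, 0.0001 at 90
```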