Long-Tailed Out-of-Distribution Detection via Normalized Outlier Distribution Adaptation

Authors: Wenjun Miao, Guansong Pang, Jin Zheng, Xiao Bai

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical results on three popular benchmarks for OOD detection in long-tailed recognition (LTR) show the superior performance of AdaptOD over state-of-the-art methods.
Researcher Affiliation | Academia | Wenjun Miao, Beihang University, China (miaowenjun@buaa.edu.cn); Guansong Pang, Singapore Management University, Singapore (gspang@smu.edu.sg); Jin Zheng, Beihang University, China (jinzheng@buaa.edu.cn); Xiao Bai, Beihang University, China (baixiao@buaa.edu.cn)
Pseudocode | Yes | Algorithm 1: AdaptOD Training
Open Source Code | Yes | Code is available at https://github.com/mala-lab/AdaptOD.
Open Datasets | Yes | Following [30, 40, 45], we use three popular long-tailed datasets, CIFAR10-LT [3], CIFAR100-LT [3] and ImageNet-LT [26], as ID data X_in. The default imbalance ratio is set to ρ = 100 on CIFAR10/100-LT. TinyImages80M [38] is used as the outlier data X_out^aux for CIFAR10/100-LT, and ImageNet-Extra [40] is used as the outlier data for ImageNet-LT.
Dataset Splits | Yes | For ID datasets, the original versions of CIFAR10 [10] and CIFAR100 [10] contain 50,000 training images and 10,000 validation images of size 32×32, with 10 and 100 classes, respectively. CIFAR10-LT and CIFAR100-LT are the imbalanced versions of these datasets, which reduce the number of training examples per class and keep the validation set unchanged.
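For concreteness, the following is a minimal sketch of the exponential-decay subsampling commonly used to derive CIFAR10/100-LT from the balanced datasets with imbalance ratio ρ = n_max / n_min. The helper names and the head-class count of 5,000 images (the per-class count in balanced CIFAR10) are assumptions for illustration, not the authors' code.

```python
# Sketch: build a long-tailed (exponentially imbalanced) subset of a balanced
# training set, e.g. CIFAR10 -> CIFAR10-LT with rho = 100. Validation data is
# left untouched, as described above.
import numpy as np

def long_tailed_counts(n_classes=10, n_max=5000, rho=100):
    """Per-class counts decaying exponentially from n_max (head) to n_max / rho (tail)."""
    return [int(n_max * rho ** (-i / (n_classes - 1))) for i in range(n_classes)]

def subsample_long_tailed(labels, counts, seed=0):
    """Return indices of a long-tailed training subset given per-class target counts."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    keep = []
    for cls, n_keep in enumerate(counts):
        cls_idx = np.where(labels == cls)[0]
        keep.extend(rng.choice(cls_idx, size=n_keep, replace=False))
    return np.array(keep)

# Example: with rho = 100 the head class keeps 5,000 images and the tail class 50.
print(long_tailed_counts(n_classes=10, n_max=5000, rho=100))
```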
Hardware Specification | Yes | All experiments are performed on 8 NVIDIA RTX 3090 GPUs.
Software Dependencies | No | The paper mentions an 'SGD optimizer' and a 'cosine annealing learning rate scheduler' but does not specify versions of software libraries such as PyTorch or Python in the main text or appendices, which limits reproducibility.
Experiment Setup | Yes | For experiments on CIFAR10-LT [3] and CIFAR100-LT [3], we pre-train our model based on ResNet-18 [11] for 320 epochs with an initial learning rate of 0.01 [1, 4] using only the cross-entropy loss, and fine-tune the linear classifier of this model for 20 epochs with an initial learning rate of 0.001 [4, 24]. The batch size is 64 for ID data at the pre-training stage, 128 for ID data at the fine-tuning stage, and 256 for outlier data at the fine-tuning stage [4, 30, 40].
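The quoted setup describes a two-stage schedule. The sketch below mirrors only those stated hyperparameters in PyTorch; the torchvision ResNet-18, the SGD momentum 0.9 and weight decay 5e-4, and the placeholder `id_loader` are assumptions, and the AdaptOD-specific OOD objective is deliberately omitted.

```python
# Sketch of the quoted two-stage schedule (pre-train 320 epochs at lr 0.01,
# then fine-tune the linear classifier for 20 epochs at lr 0.001), not the
# authors' training code.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(num_classes=10)   # CIFAR10-LT; use num_classes=100 for CIFAR100-LT
id_loader = []                     # placeholder: substitute a DataLoader over ID data (batch size 64 here)

# Stage 1: pre-training with cross-entropy only, cosine annealing schedule.
pretrain_epochs = 320
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=pretrain_epochs)
criterion = nn.CrossEntropyLoss()

for epoch in range(pretrain_epochs):
    for images, targets in id_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()
    scheduler.step()

# Stage 2: fine-tune only the linear classifier (backbone frozen). The paper's
# fine-tuning additionally draws ID batches of size 128 and outlier batches of
# size 256 for its OOD objective, which is omitted in this sketch.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True
finetune_epochs = 20
optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.001, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=finetune_epochs)
```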