Exploiting Discrepancy in Feature Statistic for Out-of-Distribution Detection

Authors: Xiaoyuan Guan, Jiankang Chen, Shenshen Bu, Yuren Zhou, Wei-Shi Zheng, Ruixuan Wang

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental evaluations demonstrate that, when combined with a strong baseline, our method can achieve state-of-the-art performance on several OOD detection benchmarks.
Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China; (2) Key Laboratory of Machine Intelligence and Advanced Computing, MOE, Guangzhou, China; (3) Peng Cheng Laboratory, Shenzhen, China
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | Source code address: https://github.com/SYSU-MIA-GROUP/statistical_discrepancy_ood.
Open Datasets | Yes | For the ImageNet1K benchmark, we use ImageNet1K (Deng et al. 2009) as the ID set and four datasets, iNaturalist (Van Horn et al. 2018), SUN (Xiao et al. 2010), Places (Zhou et al. 2017), and Textures (Cimpoi et al. 2014), as the OOD sets. The CIFAR benchmarks respectively use CIFAR10 and CIFAR100 (Krizhevsky and Hinton 2009) as ID sets, and both use six datasets, SVHN (Netzer et al. 2011), LSUN-Crop (Yu et al. 2015), LSUN-Resize (Yu et al. 2015), iSUN (Xu et al. 2015), Textures (Cimpoi et al. 2014), and Places365 (Zhou et al. 2017), as the OOD sets.
Dataset Splits | No | The paper specifies training and testing procedures but does not explicitly mention a separate validation set or split for hyperparameter tuning or model selection.
Hardware Specification | Yes | All experiments were run on NVIDIA GeForce RTX 2080 Ti GPUs.
Software Dependencies | No | The paper mentions software components like 'stochastic gradient descent optimizer' and models like 'ResNet50' and 'MobileNet-v2', but does not provide specific version numbers for any software dependencies (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | To train each classifier, we used the stochastic gradient descent optimizer with momentum (0.9) and weight decay (0.0005) for up to 200 epochs on the CIFAR datasets, with a batch size of 128. The initial learning rate was set to 0.1, and it was decayed by a factor of 10 at the 100th and 150th epoch on CIFAR.
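
For reference, the ID/OOD pairings reported in the Open Datasets row can be summarized as a simple configuration map. This is only a sketch of the benchmark layout described in the paper; the keys and structure below are illustrative, not taken from the authors' released code.

```python
# Benchmark layout from the "Open Datasets" row: each entry maps an
# in-distribution (ID) training set to the OOD test sets evaluated against it.
# Names are labels only; dataset loading is not shown here.
OOD_BENCHMARKS = {
    "ImageNet1K": {
        "id": "ImageNet1K",
        "ood": ["iNaturalist", "SUN", "Places", "Textures"],
    },
    "CIFAR10": {
        "id": "CIFAR10",
        "ood": ["SVHN", "LSUN-Crop", "LSUN-Resize", "iSUN", "Textures", "Places365"],
    },
    "CIFAR100": {
        "id": "CIFAR100",
        "ood": ["SVHN", "LSUN-Crop", "LSUN-Resize", "iSUN", "Textures", "Places365"],
    },
}
```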
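
Similarly, the CIFAR training schedule quoted in the Experiment Setup row corresponds to a standard SGD recipe. The sketch below assumes PyTorch and torchvision, uses a placeholder ResNet-18 backbone and a CIFAR-10 loader, and reflects only the hyperparameters quoted above (momentum 0.9, weight decay 0.0005, 200 epochs, batch size 128, learning rate 0.1 decayed 10x at epochs 100 and 150); it is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

device = "cuda" if torch.cuda.is_available() else "cpu"

# CIFAR-10 ID training data (swap in CIFAR100 for the other benchmark);
# input normalization is omitted for brevity.
transform = T.Compose([T.RandomCrop(32, padding=4), T.RandomHorizontalFlip(), T.ToTensor()])
train_set = torchvision.datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True, num_workers=4)

# Placeholder backbone; the paper's CIFAR classifier architecture is not specified in this section.
model = torchvision.models.resnet18(num_classes=10).to(device)
criterion = nn.CrossEntropyLoss()

# SGD with momentum 0.9 and weight decay 0.0005, initial learning rate 0.1.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
# Decay the learning rate by a factor of 10 at the 100th and 150th epoch.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100, 150], gamma=0.1)

for epoch in range(200):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```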