How to Overcome Curse-of-Dimensionality for Out-of-Distribution Detection?
Authors: Soumya Suvra Ghosal, Yiyou Sun, Yixuan Li
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide comprehensive experiments and ablations to validate the efficacy of SNN. Compared to the current best distance-based method, SNN reduces the average FPR95 by 15.96% on the CIFAR-100 benchmark. (A sketch of the FPR95 computation appears after the table.) |
| Researcher Affiliation | Academia | Department of Computer Sciences, University of Wisconsin Madison {sghosal, sunyiyou, sharonli}@cs.wisc.edu |
| Pseudocode | No | The paper does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block. It presents mathematical equations for its methodology but not structured pseudocode. |
| Open Source Code | Yes | Code is available at https://github.com/deeplearning-wisc/SNN. Our code is open-sourced for the research community. |
| Open Datasets | Yes | In this section, we make use of commonly studied CIFAR-10 (10 classes) and CIFAR-100 (100 classes) (Krizhevsky, Hinton et al. 2009) datasets as ID. We evaluate the methods on common OOD datasets: Textures (Cimpoi et al. 2014), SVHN (Netzer et al. 2011), LSUN-Crop (Yu et al. 2015), LSUN-Resize (Yu et al. 2015), iSUN (Xu et al. 2015), and Places365 (Zhou et al. 2017). In this section, we evaluate SNN on a more realistic high-resolution dataset ImageNet (Deng et al. 2009). |
| Dataset Splits | Yes | We use the standard split with 50,000 images for training and 10,000 images for testing. We also chose the dimension of the subspace s and the number of nearest neighbors k based on a validation set. |
| Hardware Specification | Yes | Our system consists of one NVIDIA A100 GPU and 48GB of memory. |
| Software Dependencies | Yes | We use PyTorch version 1.10.1. |
| Experiment Setup | Yes | We train the ResNet-101 model for 100 epochs using a batch size of 256, starting from randomly initialized weights. We use SGD with a momentum of 0.9 and a weight decay of 1e-4. We set the initial learning rate as 0.1 and use a cosine-decay schedule. We set r = 0.35 and k = 200 based on our validation strategy described in Appendix C.4. (Sketches of this training configuration and a generic k-NN score appear after the table.) |
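
The Research Type row cites a 15.96% average FPR95 reduction. For reference, below is a minimal sketch of how FPR95 (the false positive rate on OOD samples at 95% true positive rate on ID samples) is typically computed from detection scores; the function name, score convention, and example data are illustrative assumptions, not code from the paper.

```python
import numpy as np

def fpr_at_95_tpr(id_scores: np.ndarray, ood_scores: np.ndarray) -> float:
    """FPR95: fraction of OOD samples still accepted as ID at the
    threshold where 95% of ID samples are correctly accepted.
    Convention (an assumption here): higher score = more ID-like."""
    # 95% of ID scores lie at or above the 5th percentile.
    threshold = np.percentile(id_scores, 5)
    # OOD samples scoring at or above the threshold are false positives.
    return float(np.mean(ood_scores >= threshold))

# Example with synthetic, well-separated scores: FPR95 should be low.
rng = np.random.default_rng(0)
print(fpr_at_95_tpr(rng.normal(1.0, 0.3, 10_000), rng.normal(0.0, 0.3, 10_000)))
```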
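The Experiment Setup row reports the training recipe. Here is a minimal sketch of that configuration in PyTorch (the stated dependency): ResNet-101 from random initialization, SGD with momentum 0.9 and weight decay 1e-4, initial learning rate 0.1 with cosine decay, 100 epochs, batch size 256. The data loader and loop body are hypothetical placeholders, not the authors' released code.

```python
import torch
from torchvision.models import resnet101

# ResNet-101 starting from randomly initialized weights.
model = resnet101(num_classes=1000)
# SGD with momentum 0.9, weight decay 1e-4, initial LR 0.1.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
# Cosine decay of the learning rate over 100 epochs.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)

for epoch in range(100):
    # for x, y in train_loader:  # DataLoader with batch_size=256 (not shown)
    #     loss = torch.nn.functional.cross_entropy(model(x), y)
    #     optimizer.zero_grad(); loss.backward(); optimizer.step()
    scheduler.step()
```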
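The setup also reports k = 200 nearest neighbors. As a rough illustration of a distance-based score of this kind, the sketch below computes a generic k-th nearest-neighbor score over L2-normalized features. It omits the paper's subspace projection entirely and should not be read as the SNN method; the function name and normalization step are assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def kth_nn_score(train_feats: np.ndarray, test_feats: np.ndarray,
                 k: int = 200) -> np.ndarray:
    """Generic k-th nearest-neighbor OOD score (k=200 as reported).
    A plain kNN baseline over normalized features, NOT the paper's
    subspace nearest-neighbor construction."""
    # L2-normalize features, a common step in distance-based OOD detection.
    train = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    test = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    index = NearestNeighbors(n_neighbors=k).fit(train)
    dists, _ = index.kneighbors(test)
    # Negate the distance to the k-th neighbor: higher score = more ID-like.
    return -dists[:, -1]
```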