NECO: NEural Collapse Based Out-of-distribution detection

Authors: Mouïn Ben Ammar, Nacim Belkhir, Sebastian Popescu, Antoine Manzanera, Gianni Franchi

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our extensive experiments demonstrate that NECO achieves state-of-the-art results on both small and large-scale OOD detection tasks while exhibiting strong generalization capabilities across different network architectures.
Researcher Affiliation | Collaboration | Mouïn Ben Ammar, Nacim Belkhir, Sebastian Popescu, Antoine Manzanera, Gianni Franchi; U2IS Lab, ENSTA Paris, Palaiseau, France; Safran Tech, Chateaufort 78117, France; {first.last}@enstaparis.com, safrangroup.com
Pseudocode | Yes | Algorithm 1 presents the pseudo-code of the process utilised to compute NECO during inference. This assumes that a PCA is already computed on the training data, the DNN is trained and a threshold on the score is already identified. ... (Algorithm 1 is provided on page 18).
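Based on the description above (PCA precomputed on the training features, score compared against a precomputed threshold), the inference step can be sketched as below. This is an illustrative assumption, not the paper's exact implementation: the function names are hypothetical, PCA is done with a plain NumPy SVD, and `d` (the dimension of the retained neural-collapse subspace) is a hyperparameter the paper selects separately.

```python
import numpy as np

def fit_pc_basis(train_features, d):
    """Fit the top-d principal components of the training-set
    penultimate features -- the subspace where, per the paper's
    neural-collapse argument, ID features concentrate."""
    mean = train_features.mean(axis=0)
    # SVD of the centered feature matrix: rows of vt are principal directions
    _, _, vt = np.linalg.svd(train_features - mean, full_matrices=False)
    return mean, vt[:d]  # (feature mean, d x D projection basis)

def neco_score(features, mean, basis):
    """NECO score for a batch of test features: the norm of each
    feature's projection onto the principal subspace, divided by
    its full norm. Higher scores indicate in-distribution samples;
    OOD is flagged when the score falls below the threshold."""
    proj = (features - mean) @ basis.T  # coordinates in the subspace
    return np.linalg.norm(proj, axis=1) / np.linalg.norm(features, axis=1)
```

Usage would mirror Algorithm 1: fit the basis once on training features, then compare `neco_score(test_features, mean, basis)` against the validated threshold at inference time.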
Open Source Code | Yes | Code is available at https://gitlab.com/drti/neco.
Open Datasets | Yes | For experiments involving ImageNet-1K as the in-distribution (ID) dataset, we assess the model's performance on five OOD benchmark datasets: Textures (Cimpoi et al., 2014), Places365 (Zhou et al., 2016), iNaturalist (Horn et al., 2017; a subset of 10 000 images sourced from Huang & Li, 2021a), ImageNet-O (Hendrycks et al., 2021b), and SUN (Xiao et al., 2010). For experiments where CIFAR-10 (resp. CIFAR-100) serves as the ID dataset, we employ CIFAR-100 (resp. CIFAR-10) together with SVHN (Netzer et al., 2011) as OOD datasets.
Dataset Splits | No | The paper states, 'The standard dataset splits, featuring 50 000 training images and 10 000 test images, are used in these evaluations,' and refers to a 'threshold selected after the validation with the ROC Curve'. While a validation step is implied, explicit numerical details or percentages for a separate validation split are not provided in the text.
Hardware Specification | No | The paper does not provide specific details about the hardware used, such as CPU or GPU models, memory specifications, or cloud computing instances.
Software Dependencies | No | The paper does not specify version numbers for any software dependencies, libraries, or frameworks used in the experiments (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | The fine-tuning setup for the ViT model is as follows: ... For ImageNet-1K, the weights are fine-tuned for 18 000 steps, with 500 cosine warmup steps, a batch size of 256, 0.9 momentum, and an initial learning rate of 2×10⁻². For CIFAR-10 and CIFAR-100, the weights are fine-tuned for 500 and 1000 steps respectively, with 100 warm-up steps, a batch size of 512, and the remaining training parameters equal to the ImageNet-1K case. ... For both CIFAR-10 and CIFAR-100, the model is trained for 200 epochs with a batch size of 128, 5×10⁻⁴ weight decay, and 0.9 momentum. The initial learning rate is 0.1 with a cosine annealing scheduler.
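The warmup-plus-cosine learning-rate schedule described above can be sketched as a pure function of the step index. The step counts and base learning rate are the paper's ImageNet-1K values; the linear shape of the warmup ramp is an assumption, as the paper only says "cosine warmup steps":

```python
import math

def lr_at_step(step, total_steps=18_000, warmup_steps=500, base_lr=2e-2):
    """Learning rate at a given optimizer step: linear warmup to
    base_lr over warmup_steps, then cosine annealing to zero over
    the remaining steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # linear ramp-up
    # cosine decay from base_lr down to 0 over the post-warmup steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```

The CIFAR fine-tuning runs would reuse the same function with `total_steps=500` or `1000` and `warmup_steps=100`, per the quoted setup.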