Learning to Shape In-distribution Feature Space for Out-of-distribution Detection

Authors: Yonggang Zhang, Jie Lu, Bo Peng, Zhen Fang, Yiu-ming Cheung

NeurIPS 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Extensive evaluations across mainstream OOD detection benchmarks empirically manifest the superiority of the proposed DRL over its advanced counterparts." |
| Researcher Affiliation | Academia | ¹Hong Kong Baptist University; ²Australian Artificial Intelligence Institute, University of Technology Sydney |
| Pseudocode | No | The paper describes its algorithm steps in prose within the "DRL as Expectation-Maximization" section but does not include a formally labeled "Algorithm" or "Pseudocode" block. |
| Open Source Code | No | "We will release our code upon acceptance." |
| Open Datasets | Yes | "Following the setup in [46, 40], we consider CIFAR-10 [30] and CIFAR-100 [30] as ID datasets and train ResNet-18 [20] and ResNet-34 [20] on them respectively." (See the data-loading sketch after the table.) |
| Dataset Splits | No | The paper describes training and testing procedures but does not explicitly mention a validation split for hyperparameter tuning or early stopping. (An illustrative split is sketched below.) |
| Hardware Specification | Yes | "We perform all experiments on an NVIDIA A100 GPU using Pytorch." |
| Software Dependencies | No | The paper mentions "Pytorch" but does not specify a version number or list other software dependencies with their versions. |
| Experiment Setup | Yes | "We train the model using stochastic gradient descent with momentum 0.9, and weight decay 10⁻⁴ for 500 epochs. The initial learning rate is 0.5 with cosine scheduling and the batch size is 512." (See the training sketch after the table.) |
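The Open Datasets row fixes the ID data and backbone. Below is a minimal loading sketch, assuming the standard torchvision CIFAR-10 loader and the usual CIFAR adaptation of ResNet-18 (3×3 stem, max-pool removed); the normalization statistics and augmentations are common conventions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms

# Standard CIFAR-10 normalization statistics (a common convention;
# the paper does not state its exact preprocessing).
normalize = transforms.Normalize((0.4914, 0.4822, 0.4465),
                                 (0.2470, 0.2435, 0.2616))
train_tf = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    normalize,
])

# CIFAR-10 as the ID dataset, per the paper's setup.
train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=train_tf)

# ResNet-18 backbone; the 3x3 stem and dropped max-pool are the usual
# CIFAR adaptation (an assumption -- the paper cites [20] without details).
model = torchvision.models.resnet18(num_classes=10)
model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
model.maxpool = nn.Identity()
```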
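Because no validation split is reported (Dataset Splits row), a reproducer has to choose one. The split below, continuing the sketch above, is purely illustrative: the 90/10 ratio and the seed are assumptions, not choices from the paper.

```python
from torch.utils.data import random_split

# Hold out 10% of CIFAR-10's 50,000 training images for validation.
# Both the ratio and the seed are illustrative; the paper specifies neither.
generator = torch.Generator().manual_seed(0)
val_size = len(train_set) // 10  # 5,000 images
train_subset, val_subset = random_split(
    train_set, [len(train_set) - val_size, val_size], generator=generator)
```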
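The Experiment Setup row is specific enough to reconstruct the optimizer and schedule. The sketch below, continuing from the code above, wires up SGD with momentum 0.9 and weight decay 10⁻⁴, an initial learning rate of 0.5 with cosine annealing over 500 epochs, and batch size 512. The minimum learning rate, the absence of warm-up, and the cross-entropy placeholder for DRL's actual objective are all assumptions.

```python
from torch.utils.data import DataLoader

EPOCHS = 500
loader = DataLoader(train_subset, batch_size=512, shuffle=True,
                    num_workers=4, pin_memory=True)

# SGD with momentum 0.9, weight decay 1e-4, initial LR 0.5, as reported.
optimizer = torch.optim.SGD(model.parameters(), lr=0.5,
                            momentum=0.9, weight_decay=1e-4)
# Cosine annealing over the full run; eta_min=0 is an assumption.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=EPOCHS)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
# Placeholder objective: DRL's actual loss is not specified in enough
# detail to reproduce here.
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(EPOCHS):
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```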