ReAct: Out-of-distribution Detection With Rectified Activations

Authors: Yiyou Sun, Chuan Guo, Yixuan Li

NeurIPS 2021

Reproducibility assessment (each entry lists the variable, the result, and the LLM response):
Research Type: Experimental. "We perform extensive evaluations and establish state-of-the-art performance on a suite of common OOD detection benchmarks, including CIFAR-10 and CIFAR-100, as well as a large-scale ImageNet dataset [7]. ReAct outperforms the best baseline by a large margin, reducing the average FPR95 by up to 25.05%."
Researcher Affiliation: Collaboration. Yiyou Sun (Department of Computer Sciences, University of Wisconsin-Madison, sunyiyou@cs.wisc.edu); Chuan Guo (Facebook AI Research, chuanguo@fb.com); Yixuan Li (Department of Computer Sciences, University of Wisconsin-Madison, sharonli@cs.wisc.edu).
Pseudocode: No. The paper describes the ReAct operation mathematically (Equations 1 and 2) but does not provide structured pseudocode or algorithm blocks.
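Since the paper gives the operation only as equations, a minimal PyTorch sketch may help; this is our illustration, not the authors' code. The function name react and the stand-in final layer fc are ours; pairing ReAct with the energy score follows the paper's default scoring function.

    import torch
    import torch.nn as nn

    def react(h: torch.Tensor, c: float) -> torch.Tensor:
        # ReAct truncates penultimate-layer activations element-wise:
        # ReAct(x; c) = min(x, c).
        return torch.clamp(h, max=c)

    def energy_score(h: torch.Tensor, fc: nn.Linear, c: float) -> torch.Tensor:
        # Logits are computed from the rectified features; the energy
        # score over those logits (higher = more in-distribution) is
        # the paper's default OOD score.
        logits = fc(react(h, c))
        return torch.logsumexp(logits, dim=-1)

At test time, inputs whose score falls below a chosen cutoff are flagged as OOD.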
Open Source Code: Yes. "Code is available at: https://github.com/deeplearning-wisc/react.git"
Open Datasets: Yes. "We use a pre-trained ResNet-50 model [12] for ImageNet-1k. ... We evaluate on CIFAR-10 and CIFAR-100 [27] datasets as in-distribution data, using the standard split with 50,000 training images and 10,000 test images."
Dataset Splits: No. The paper mentions "We use a validation set of Gaussian noise images" for selecting the parameter p, but it does not specify how a validation set would be drawn from the main in-distribution datasets (ImageNet, CIFAR-10/100), e.g., split percentages or sample counts.
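As an aside, such a Gaussian-noise validation set requires no held-out ID data. The sketch below is only an assumption about how the noise images might be constructed; the shape, mean shift, and clipping are not specified in the quoted text.

    import torch

    def gaussian_noise_images(n: int, size: int = 224) -> torch.Tensor:
        # Hypothetical construction: i.i.d. Gaussian pixels shifted to
        # mean 0.5 and clipped to [0, 1]; all parameters are assumptions.
        return torch.randn(n, 3, size, size).add(0.5).clamp(0.0, 1.0)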
Hardware Specification: No. The paper states "All experiments are based on the hardware described in Appendix D." However, Appendix D is not included in the provided text, so no specific hardware details are available in the main body.
Software Dependencies: No. The paper mentions models like ResNet-50 and MobileNet-v2, and concepts like BatchNorm, WeightNorm, and GroupNorm, but it does not specify any software dependencies (e.g., libraries, frameworks) with version numbers needed for replication.
Experiment Setup: Yes. "We select p from {10, 65, 80, 85, 90, 95, 99} based on the FPR95 performance. The optimal p is 90. ... For both CIFAR-10 and CIFAR-100, the models are trained for 100 epochs. The start learning rate is 0.1 and decays by a factor of 10 at epochs 50, 75, and 90."
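A short sketch of how this setup translates into a threshold sweep, assuming id_activations (penultimate-layer activations collected on in-distribution data) and an fpr95 evaluation callback; both names are our stand-ins, not the authors' API.

    import numpy as np

    def threshold_from_percentile(id_activations: np.ndarray, p: float) -> float:
        # The rectification threshold c is the p-th percentile of the
        # ID activation distribution.
        return float(np.percentile(id_activations, p))

    def select_p(id_activations, fpr95, grid=(10, 65, 80, 85, 90, 95, 99)):
        # Choose the percentile whose threshold yields the lowest FPR95;
        # fpr95(c) is assumed to evaluate the detector at threshold c.
        return min(grid, key=lambda p: fpr95(threshold_from_percentile(id_activations, p)))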