RankFeat: Rank-1 Feature Removal for Out-of-distribution Detection
Authors: Yue Song, Nicu Sebe, Wei Wang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive ablation studies and comprehensive theoretical analyses are presented to support the empirical results. Our RankFeat establishes the state-of-the-art performance on the large-scale ImageNet benchmark and a suite of widely used OOD datasets across different network depths and architectures. |
| Researcher Affiliation | Academia | Yue Song¹, Nicu Sebe¹, and Wei Wang². ¹Department of Information Engineering and Computer Science, University of Trento, Italy. ²Beijing Jiaotong University, China. |
| Pseudocode | No | The paper describes the Power Iteration algorithm in text within Section 2, but it does not present it in a formal pseudocode block or algorithm environment (a generic sketch is given after this table). |
| Open Source Code | Yes | Code is publicly available at https://github.com/KingJamesSong/RankFeat. |
| Open Datasets | Yes | Datasets. In line with [26, 52, 27], we mainly evaluate our method on the large-scale ImageNet-1k benchmark [6]. The large-scale dataset is more challenging than the traditional CIFAR benchmark [36] because the images are more realistic and diverse (i.e., 1.28M images of 1,000 classes). For the OOD datasets, we select four test sets from subsets of iNaturalist [58], SUN [63], Places [70], and Textures [5]. |
| Dataset Splits | No | The paper mentions using ImageNet-1k as the in-distribution (ID) training data for the model and other datasets as OOD test sets, but it does not specify explicit training/validation/test dataset splits (e.g., percentages or counts) for their experiments. It primarily evaluates OOD detection on pre-trained models. |
| Hardware Specification | No | The paper mentions 'Processing Time Per Image (ms)' and 'The test batch size is set as 16' but does not provide specific details about the hardware used, such as GPU/CPU models, memory, or cloud instance types. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, CUDA 11.x). |
| Experiment Setup | Yes | Unless explicitly specified, we apply RankFeat on the Block 4 feature by default. The main evaluation is done using the Google BiT-S model [35] pretrained on ImageNet-1k with ResNetv2-101 [20]. We also evaluate the performance on SqueezeNet [29]... and on T2T-ViT-24 [67]. RankFeat performs the fusion at the logit space and computes the score function as log Σᵢ exp((yᵢ^Block3 + yᵢ^Block4)/2). The test batch size is set as 16. The approximate solution by PI yields competitive performances (Table 7 shows results for PI with 100, 50, 20, 10, and 5 iterations; see the sketches after this table). |
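
For reference, the Power Iteration routine mentioned in the Pseudocode row can be sketched as below. This is a generic approximation of the dominant singular value and vectors of a feature matrix, not the authors' released implementation; the function name and default iteration count are illustrative.

```python
import torch

def power_iteration(X, n_iter=20):
    """Approximate the dominant singular value/vectors of a feature matrix X
    (shape: C x HW) without a full SVD. Generic sketch; names are illustrative."""
    # Start from a random estimate of the right singular vector
    v = torch.randn(X.shape[1], 1, device=X.device, dtype=X.dtype)
    v = v / v.norm()
    for _ in range(n_iter):
        u = X @ v          # update left singular vector estimate
        u = u / u.norm()
        v = X.t() @ u      # update right singular vector estimate
        v = v / v.norm()
    # Dominant singular value s1 = u^T X v
    s1 = (u.t() @ X @ v).squeeze()
    return s1, u, v
```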
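The scoring step quoted in the Experiment Setup row can likewise be sketched as follows, assuming a hypothetical `head_fn` that maps a feature map to logits (e.g. global average pooling followed by the final fc layer). The rank-1 removal and the logsumexp score follow the paper's description; the variable names and the helper functions are illustrative.

```python
import torch

def rankfeat_score(feat, head_fn):
    """RankFeat-style OOD score for one image (sketch).

    feat:    Block-4 feature map of shape (C, H, W).
    head_fn: hypothetical callable mapping a (C, H, W) feature map to logits.
    """
    C, H, W = feat.shape
    X = feat.reshape(C, H * W)                     # flatten to a feature matrix
    # Remove the rank-1 component s1 * u1 v1^T (largest singular value);
    # this SVD can be replaced by the Power Iteration sketch above.
    U, S, Vh = torch.linalg.svd(X, full_matrices=False)
    X = X - S[0] * torch.outer(U[:, 0], Vh[0, :])
    logits = head_fn(X.reshape(C, H, W))
    return torch.logsumexp(logits, dim=-1)         # energy-style score

def fused_score(logits_block3, logits_block4):
    """Logit-space fusion from the setup row: logsumexp of averaged logits."""
    return torch.logsumexp((logits_block3 + logits_block4) / 2, dim=-1)
```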