Single Layer Predictive Normalized Maximum Likelihood for Out-of-Distribution Detection
Authors: Koby Bibas, Meir Feder, Tal Hassner
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We extensively evaluate our approach on 74 OOD detection benchmarks using DenseNet-100, ResNet-34, and WideResNet-40 models trained with CIFAR-100, CIFAR-10, SVHN, and ImageNet-30 showing a significant improvement of up to 15.6% over recent leading methods. |
| Researcher Affiliation | Collaboration | Koby Bibas, School of Electrical Engineering, Tel Aviv University (kobybibas@gmail.com); Meir Feder, School of Electrical Engineering, Tel Aviv University (meir@eng.tau.ac.il); Tal Hassner, Facebook AI (talhassner@gmail.com) |
| Pseudocode | No | The paper describes its method using mathematical equations and prose, but it does not include a dedicated pseudocode or algorithm block (a hedged code sketch of the score computation is given after this table). |
| Open Source Code | Yes | Code is available at https://github.com/kobybibas/pnml_ood_detection |
| Open Datasets | Yes | For datasets that represent known classes, we use CIFAR-100, CIFAR-10 (Krizhevsky et al., 2014) and SVHN (Netzer et al., 2011). [...] In addition, to evaluate higher resolution images, we use ImageNet-30 set (Hendrycks et al., 2019b). |
| Dataset Splits | No | The paper mentions using standard datasets (CIFAR-100, CIFAR-10, SVHN, ImageNet-30) as in-distribution (IND) sets and TinyImageNet, LSUN, iSUN, Uniform noise, and Gaussian noise as OOD sets, but it does not specify the exact training, validation, and test splits used for these datasets. |
| Hardware Specification | Yes | We ran all experiments on NVIDIA K80 GPU. |
| Software Dependencies | No | The paper does not provide version numbers for ancillary software components; it names the models and datasets used but does not detail the software environment (e.g., specific library versions). |
| Experiment Setup | No | The paper mentions using pretrained models (ResNet-34, DenseNet-BC-100, WideResNet-40) and the datasets they were trained with (CIFAR-100, CIFAR-10, SVHN), but it does not provide specific hyperparameters such as learning rates, batch sizes, number of epochs, or optimizer settings for its own experimental setup. |
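
Since the paper provides no pseudocode block, the sketch below illustrates the general shape of a single-layer pNML-style OOD score: extract last-layer embeddings, compare each test embedding against the correlation matrix of the training embeddings, and turn the resulting quadratic form into a regret over the softmax probabilities. This is a minimal sketch assuming this particular regret expression; the exact formula and implementation details are in the paper and its repository (https://github.com/kobybibas/pnml_ood_detection), and the function and variable names here are illustrative only.

```python
# Hedged sketch of a single-layer pNML-style regret score for OOD detection.
# The regret expression below is an assumption for illustration and may differ
# from the authors' exact derivation; see the official repository for the
# reference implementation.
import numpy as np


def pnml_regret_scores(train_embeddings: np.ndarray,
                       test_embeddings: np.ndarray,
                       test_probs: np.ndarray) -> np.ndarray:
    """Return a per-sample regret score; larger regret suggests OOD.

    train_embeddings: (N, d) last-layer features of the training set.
    test_embeddings:  (M, d) last-layer features of the test samples.
    test_probs:       (M, C) softmax probabilities of the test samples.
    """
    # Correlation matrix of the training embeddings; pseudo-inverse keeps the
    # computation stable when features are collinear or d > N.
    corr = train_embeddings.T @ train_embeddings          # (d, d)
    corr_pinv = np.linalg.pinv(corr)

    # Quadratic form x^T (X^T X)^+ x: how far each test embedding lies from
    # the subspace spanned by the training data.
    quad = np.einsum("md,dk,mk->m",
                     test_embeddings, corr_pinv, test_embeddings)
    x_t_g = quad / (1.0 + quad)                            # maps to [0, 1)

    # Assumed regret form: probabilities of samples far from the training
    # subspace are "stretched", inflating the normalization factor, so the
    # log-normalizer (regret) grows for out-of-distribution inputs.
    p = np.clip(test_probs, 1e-12, 1.0)
    norm_factor = np.sum(p / (p + (1.0 - p) * p ** x_t_g[:, None]), axis=1)
    return np.log(norm_factor)                             # regret >= 0
```

With this form the regret is exactly zero when a test embedding lies fully inside the training subspace (the quadratic form vanishes and the normalization factor is 1) and grows as the embedding moves away from it, which is the behavior the paper exploits for thresholding OOD samples.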