Delving into Out-of-Distribution Detection with Vision-Language Representations
Authors: Yifei Ming, Ziyang Cai, Jiuxiang Gu, Yiyou Sun, Wei Li, Yixuan Li
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that MCM achieves superior performance on a wide variety of real-world tasks. |
| Researcher Affiliation | Collaboration | Yifei Ming¹, Ziyang Cai¹, Jiuxiang Gu², Yiyou Sun¹, Wei Li³, Yixuan Li¹; ¹Department of Computer Sciences, University of Wisconsin-Madison, ²Adobe, ³Google Research |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/deeplearning-wisc/MCM. |
| Open Datasets | Yes | We consider the following ID datasets: CUB-200 [80], STANFORD-CARS [39], FOOD-101 [6], OXFORD-PET [57] and variants of IMAGENET [11]. |
| Dataset Splits | No | The paper states 'λ is chosen so that a high fraction of ID data (e.g., 95%) is above the threshold', which describes a validation-like process for threshold selection, but it does not specify a formal dataset split (e.g., percentages or sample counts) for this validation (a minimal sketch of this threshold selection appears after the table). |
| Hardware Specification | No | The paper mentions models like CLIP-B/16 and ViT-B/16, but does not specify the hardware (e.g., GPU/CPU models, memory) used to run the experiments within the provided text. |
| Software Dependencies | No | The paper mentions using 'CLIP' and 'Transformer' models but does not list specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | Unless specified otherwise, the temperature is 1 for all experiments. |
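
For context on the Dataset Splits and Experiment Setup rows: MCM scores an input by taking the maximum of a temperature-scaled softmax over cosine similarities between the image embedding and the class-prompt (concept) text embeddings, and the threshold λ is then set so that a high fraction (e.g., 95%) of ID data scores above it. The sketch below is illustrative only, assuming pre-extracted CLIP-style features; names such as `mcm_scores`, `select_threshold`, `image_feats`, and `text_feats` are placeholders, not identifiers from the paper's released code (https://github.com/deeplearning-wisc/MCM).

```python
# Minimal sketch of MCM-style scoring and threshold selection.
# Assumes image/text features were already extracted with a CLIP-like
# model (e.g., ViT-B/16); all names here are illustrative placeholders.
import torch

def mcm_scores(image_feats: torch.Tensor,
               text_feats: torch.Tensor,
               temperature: float = 1.0) -> torch.Tensor:
    """Maximum concept matching score per image.

    image_feats: (N, D) image embeddings.
    text_feats:  (C, D) class-prompt (concept) embeddings.
    Returns an (N,) tensor; higher values indicate more ID-like inputs.
    """
    # Cosine similarity between each image and each concept prompt.
    img = image_feats / image_feats.norm(dim=-1, keepdim=True)
    txt = text_feats / text_feats.norm(dim=-1, keepdim=True)
    sims = img @ txt.t()                           # (N, C)
    # Softmax over concepts; the report notes temperature 1 by default.
    probs = torch.softmax(sims / temperature, dim=-1)
    return probs.max(dim=-1).values                # MCM score per image

def select_threshold(id_scores: torch.Tensor, tpr: float = 0.95) -> float:
    """Pick lambda so that a fraction `tpr` of ID scores lie above it."""
    return torch.quantile(id_scores, 1.0 - tpr).item()
```

With the default temperature of 1 from the Experiment Setup row, `select_threshold(mcm_scores(image_feats, text_feats), tpr=0.95)` mirrors the "95% of ID data above the threshold" rule quoted in the Dataset Splits row.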