LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning
Authors: Atsuyuki Miyai, Qing Yu, Go Irie, Kiyoharu Aizawa
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on the large-scale ImageNet OOD detection benchmarks demonstrate the superiority of our LoCoOp over zero-shot, fully supervised detection methods and prompt learning methods. |
| Researcher Affiliation | Collaboration | Atsuyuki Miyai (1), Qing Yu (1, 2), Go Irie (3), Kiyoharu Aizawa (1); 1: The University of Tokyo, 2: LY Corporation, 3: Tokyo University of Science |
| Pseudocode | No | The paper includes diagrams and descriptions of the method but no formal pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available via https://github.com/AtsuMiyai/LoCoOp. |
| Open Datasets | Yes | Datasets. We use the ImageNet-1K dataset [5] as the ID data. For OOD datasets, we adopt the same ones as in [18], including subsets of iNaturalist [47], SUN [51], Places [58], and TEXTURE [4]. |
| Dataset Splits | Yes | For the few-shot training, we follow the few-shot evaluation protocol adopted in CLIP [37] and CoOp [60], using 1, 2, 4, 8, and 16 shots for training, respectively, and deploying models in the full test sets. A sampling sketch follows the table. |
| Hardware Specification | Yes | We use a single Nvidia A100 GPU for all experiments. |
| Software Dependencies | No | The paper mentions using 'CLIP-ViT-B/16 models' and refers to CLIP, but it does not list specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | Yes | Other hyperparameters (e.g., training epoch=50, learning rate=0.002, batch size=32, and token lengths N=16) are the same as those of CoOp [60]. A configuration sketch follows the table. |
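
The Dataset Splits row describes the CoOp-style few-shot protocol: sample a fixed number K of training images per class, then evaluate on the full test set. Below is a minimal Python sketch of that K-shot sampling, written under stated assumptions; the function name `build_few_shot_split` and its `samples`/`seed` arguments are illustrative and are not taken from the official LoCoOp repository.

```python
import random
from collections import defaultdict

def build_few_shot_split(samples, k, seed=0):
    """Sketch of CoOp-style K-shot sampling (hypothetical helper, not from the repo).

    samples: list of (image_path, class_id) pairs for the full training set.
    Returns a list containing exactly k randomly chosen samples per class.
    """
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    by_class = defaultdict(list)
    for path, label in samples:
        by_class[label].append((path, label))
    few_shot = []
    for label, items in by_class.items():
        few_shot.extend(rng.sample(items, k))
    return few_shot

# Example: a 16-shot split over ImageNet-1K yields 16 x 1000 = 16,000 training images,
# while evaluation still uses the full test set.
# train_16shot = build_few_shot_split(imagenet_train, k=16)
```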
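The Experiment Setup row lists the hyperparameters inherited from CoOp. As a hedged aid to reproduction, the sketch below collects them in a plain Python dictionary; the key names are illustrative and do not match the configuration files of the LoCoOp codebase.

```python
# Training configuration as reported in the paper (key names are hypothetical).
config = {
    "backbone": "CLIP-ViT-B/16",  # vision-language backbone used in the experiments
    "epochs": 50,                 # training epochs, following CoOp
    "learning_rate": 0.002,
    "batch_size": 32,
    "n_ctx": 16,                  # length N of the learnable context tokens
    "shots": [1, 2, 4, 8, 16],    # few-shot settings evaluated
}
```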