OOD-MAML: Meta-Learning for Few-Shot Out-of-Distribution Detection and Classification
Authors: Taewon Jeong, Heeyoung Kim
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We run the experiments for few-shot OOD detection and classification tasks with OOD-MAML. In the meta-training phase, we set the 5-shot data of one class in Dtrain and set 50 samples in Dtest... Under these settings, we evaluated the performance of OOD-MAML by implementing OOD detection and classification in experiments, and compared the obtained results with the performances of several OOD detection methods. |
| Researcher Affiliation | Academia | Taewon Jeong Heeyoung Kim Department of Industrial and Systems Engineering KAIST Daejeon 34141, Republic of Korea {papilion89,heeyoungkim}@kaist.ac.kr |
| Pseudocode | No | The paper describes methods using mathematical equations and textual explanations, but it does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | The code for OOD-MAML is available at https://github.com/twj-KAIST/OOD-MAML. |
| Open Datasets | Yes | We ran experiments on Omniglot [14], CIFAR-FS [2], and miniImageNet [24], which are popular benchmark datasets used for few-shot learning. |
| Dataset Splits | No | The paper describes task-specific data usage for meta-training and meta-testing (e.g., 'In the meta-training phase, we set the 5-shot data of one class in Dtrain and set 50 samples in Dtest...') but does not provide explicit train/validation/test dataset splits for a single, overall dataset. The word 'validation' is used once in the context of validating adaptation, not a dataset split. |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU models, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper does not specify version numbers for any software dependencies, libraries, or frameworks used in the experiments. |
| Experiment Setup | No | The paper mentions general aspects of the experimental setup such as '5-shot' and 'N-way' settings, and CNN architecture details like '64 filters for Omniglot and CIFAR-FS, and 32 filters for miniImageNet'. It also mentions learning rates (α, β, βfake). However, it defers specific hyperparameter values to supplementary material: 'Details about hyperparameters are described in Supplementary material.' |
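The episode structure quoted above (a 5-shot support set per class, a 50-sample query set, and inner/outer learning rates α and β) follows the standard MAML recipe that OOD-MAML extends. The following is a minimal NumPy sketch of that inner/outer loop on a toy linear-regression task; the learning-rate names `alpha` and `beta` come from the paper, while the model, the synthetic tasks, and the first-order meta-gradient are illustrative assumptions, not the authors' implementation (which is available at the GitHub link above).

```python
import numpy as np

def inner_adapt(w, x_s, y_s, alpha=0.05, steps=5):
    """Task-specific adaptation: a few SGD steps on the few-shot support set."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * x_s.T @ (x_s @ w - y_s) / len(x_s)  # gradient of MSE loss
        w -= alpha * grad
    return w

def meta_step(w, tasks, alpha=0.05, beta=0.001, inner_steps=5):
    """First-order meta-update: average query-set gradients at the adapted
    parameters across a batch of tasks (a simplification of full MAML)."""
    meta_grad = np.zeros_like(w)
    for x_s, y_s, x_q, y_q in tasks:
        w_task = inner_adapt(w, x_s, y_s, alpha, inner_steps)
        meta_grad += 2 * x_q.T @ (x_q @ w_task - y_q) / len(x_q)
    return w - beta * meta_grad / len(tasks)

# Toy episode generator: 5-shot support, 50-sample query, as in the paper's setup.
rng = np.random.default_rng(0)
def make_task(true_w, n_support=5, n_query=50, dim=3):
    x_s = rng.normal(size=(n_support, dim))
    x_q = rng.normal(size=(n_query, dim))
    return x_s, x_s @ true_w, x_q, x_q @ true_w

w = np.zeros(3)
for _ in range(100):  # meta-training loop over sampled tasks
    tasks = [make_task(rng.normal(size=3)) for _ in range(4)]
    w = meta_step(w, tasks)
```

In OOD-MAML itself the inner loop additionally adapts generated "fake" samples (with their own rate βfake) so the adapted model can reject out-of-distribution queries; that part is omitted here.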