Out-of-Distribution Detection with An Adaptive Likelihood Ratio on Informative Hierarchical VAE
Authors: Yewen Li, Chaojie Wang, Xiaobo Xia, Tongliang Liu, Xin Miao, Bo An
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that our method can significantly outperform existing state-of-the-art unsupervised OOD detection approaches. (Section 4, Experiments) |
| Researcher Affiliation | Collaboration | ¹Nanyang Technological University; ²University of Sydney; ³Amazon |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See supplemental material |
| Open Datasets | Yes | Fashion MNIST [39] (in) / MNIST [40] (out) and CIFAR10 [41] (in) / SVHN [42] (out)... we add KMNIST [43], notMNIST [44], Omniglot [45] and Small NORB [46] datasets; for CIFAR10/SVHN pair, we add CelebA [47], Places365 [48], Flower102 [49] and LFWPeople [50] datasets. (A loading sketch follows the table.) |
| Dataset Splits | No | The paper mentions training on the training split and evaluating on the testing splits, but it does not explicitly describe a validation split or its size/percentage for the authors' own experiments. |
| Hardware Specification | Yes | All experiments are performed on a PC with an NVIDIA RTX 3090 GPU, and our code is implemented with PyTorch [53]. |
| Software Dependencies | Yes | The models are implemented in PyTorch 1.10.1. |
| Experiment Setup | Yes | For optimization, we adopt the same Adam optimizer [52] with a learning rate of 3e-4. We train all models in the comparison with a batch size of 128 and a maximum of 1000 epochs. (A training-loop sketch follows the table.) |
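For reference, below is a minimal sketch of how the in-/out-of-distribution dataset pairs listed in the Open Datasets row could be loaded with torchvision. The root path `data`, the `ToTensor` preprocessing, and the split choices are illustrative assumptions, not the authors' exact pipeline.

```python
# Hypothetical loading sketch for the ID/OOD pairs named in the paper.
# Paths, transforms, and split choices are illustrative assumptions.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()  # assumed preprocessing

# Pair 1: Fashion-MNIST (in-distribution) vs. MNIST (out-of-distribution).
fmnist_train = datasets.FashionMNIST("data", train=True, download=True, transform=to_tensor)
fmnist_test = datasets.FashionMNIST("data", train=False, download=True, transform=to_tensor)
mnist_ood = datasets.MNIST("data", train=False, download=True, transform=to_tensor)

# Pair 2: CIFAR10 (in-distribution) vs. SVHN (out-of-distribution).
cifar_train = datasets.CIFAR10("data", train=True, download=True, transform=to_tensor)
svhn_ood = datasets.SVHN("data", split="test", download=True, transform=to_tensor)

# Several of the additional OOD sets (e.g. KMNIST, Omniglot, CelebA,
# Places365) are also available under torchvision.datasets.

# Batch size 128 as reported in the paper's experiment setup.
train_loader = DataLoader(fmnist_train, batch_size=128, shuffle=True)
```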
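And a minimal, self-contained sketch of the reported optimization setup (Adam, learning rate 3e-4, batch size 128, up to 1000 epochs). `TinyVAE` here is a hypothetical one-level stand-in, not the paper's informative hierarchical VAE, whose actual implementation is provided in the supplemental material.

```python
# Sketch of the reported training configuration. TinyVAE is a hypothetical
# one-level stand-in; the paper's model is a hierarchical VAE.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class TinyVAE(nn.Module):
    def __init__(self, d_in=784, d_z=32):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_z)  # outputs mean and log-variance
        self.dec = nn.Linear(d_z, d_in)

    def loss(self, x):
        x = x.flatten(1)
        mu, logvar = self.enc(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        rec = F.binary_cross_entropy_with_logits(self.dec(z), x, reduction="none").sum(1)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(1)
        return (rec + kl).mean()  # negative ELBO

train_set = datasets.FashionMNIST("data", train=True, download=True,
                                  transform=transforms.ToTensor())
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)  # batch size from the paper

model = TinyVAE()
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)  # Adam, lr 3e-4 as reported

for epoch in range(1000):  # max epoch 1000 as reported
    for x, _ in train_loader:  # labels unused: unsupervised OOD detection
        optimizer.zero_grad()
        loss = model.loss(x)
        loss.backward()
        optimizer.step()
```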