Hyper-opinion Evidential Deep Learning for Out-of-Distribution Detection

Authors: Jingen Qu, Yufei Chen, Xiaodong Yue, Wei Fu, Qiguang Huang

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments over many datasets demonstrate that our proposed method outperforms existing OOD detection methods.
Researcher Affiliation | Academia | Jingen Qu, School of Computer Science and Technology, Tongji University, Shanghai, China (newcity@tongji.edu.cn); Yufei Chen, School of Computer Science and Technology, Tongji University, Shanghai, China (yufeichen@tongji.edu.cn); Xiaodong Yue, Artificial Intelligence Institute, Shanghai University, Shanghai, China (yswantfly@shu.edu.cn); Wei Fu, School of Computer Science and Technology, Tongji University, Shanghai, China (cs_fuwei@outlook.com); Qiguang Huang, School of Computer Science and Technology, Tongji University, Shanghai, China (1753543@tongji.edu.cn)
Pseudocode | No | The paper does not contain explicitly labeled "Pseudocode" or "Algorithm" blocks; it describes the method using mathematical equations and textual descriptions.
Open Source Code | Yes | The code of this paper is included in the supplementary material.
Open Datasets | Yes | In-distribution datasets: we use CIFAR-10 [28], CIFAR-100 [28], Flower-102 [46], and CUB-200-2011 [65] as ID data. Out-of-distribution datasets: for the OOD test datasets, we use three common benchmarks from the OpenOOD benchmark [69]: SVHN [44], Textures [6], and Places365 [74]. (A hedged dataset-loading sketch follows the table.)
Dataset Splits | Yes | We use the standard data split for all datasets, and the number of training epochs is 100, the initial learning rate is 0.0001 with AdamW [39], and the batch size is 128.
Hardware Specification | Yes | All experiments are implemented with PyTorch [50] and carried out with an NVIDIA GeForce RTX 3090 GPU.
Software Dependencies | No | The paper states that all experiments are implemented with PyTorch [50] but does not specify a version number for PyTorch or any other software dependency.
Experiment Setup | Yes | Implementation details: We follow the experiment settings outlined in OpenOOD [69]. We use ResNet-18 [15] for CIFAR-10 and CIFAR-100; for more intricate datasets not included in OpenOOD, such as the fine-grained datasets Flower-102 and CUB-200-2011, we employ ResNet-34 [15] for enhanced feature representation. All experiments are implemented with PyTorch [50] and carried out with an NVIDIA GeForce RTX 3090 GPU. We use the standard data split for all datasets; the number of training epochs is 100, the initial learning rate is 0.0001 with AdamW [39], and the batch size is 128. At test time, all images are resized to 224×224. For the HEDL model, we first train the feature extractor with a softmax layer for 90 epochs and then train in the HEDL framework for 10 epochs. HEDL does not introduce any additional hyperparameters, thereby eliminating the need for extensive hyperparameter tuning, and W_prior is set to 1 for HEDL. (A hedged training-schedule sketch follows the table.)
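
The datasets listed in the Open Datasets row are all publicly available, so the ID/OOD test split can be assembled directly with torchvision. The snippet below is a minimal sketch under assumed conventions, not the paper's pipeline: the directory roots are placeholders, CIFAR-10 stands in for the other ID datasets, and the ImageFolder layout for Textures (DTD) and Places365 is an assumption in place of the OpenOOD benchmark's own loaders.

```python
# Hedged sketch: assembling the ID/OOD test sets named in the paper with torchvision.
# Dataset roots and the ImageFolder layout for Textures/Places365 are assumptions;
# the paper itself follows the OpenOOD benchmark loaders.
import torchvision.transforms as T
from torchvision.datasets import CIFAR10, SVHN, ImageFolder

# At test time, all images are resized to 224x224 (per the Experiment Setup row).
test_tf = T.Compose([T.Resize((224, 224)), T.ToTensor()])

id_test = CIFAR10(root="data/cifar10", train=False, download=True, transform=test_tf)
ood_test = {
    "SVHN": SVHN(root="data/svhn", split="test", download=True, transform=test_tf),
    # Textures (DTD) and Places365 are assumed to be extracted as plain image folders.
    "Textures": ImageFolder("data/textures", transform=test_tf),
    "Places365": ImageFolder("data/places365", transform=test_tf),
}
```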
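
The Experiment Setup row also pins down the training schedule (AdamW at an initial learning rate of 0.0001, batch size 128, 90 softmax epochs followed by 10 HEDL epochs) precisely enough to sketch. The code below is not the authors' released implementation: torchvision's resnet18 is assumed as the backbone, hedl_loss is a named placeholder for the paper's hyper-opinion evidential objective, and only the optimizer, batch size, and epoch split are taken from the table.

```python
# Minimal sketch of the reported training schedule; hedl_loss is a placeholder,
# NOT the paper's objective. Backbone choice, worker count, and device handling
# are assumptions.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision.models import resnet18

def hedl_loss(logits, labels):
    # Placeholder for the HEDL objective defined in the paper (with W_prior = 1).
    raise NotImplementedError("substitute the hyper-opinion evidential loss here")

def train(train_set, num_classes=10, device="cuda"):
    # ResNet-18 for CIFAR-10/100; the paper uses ResNet-34 for Flower-102 and CUB-200-2011.
    model = resnet18(num_classes=num_classes).to(device)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # initial lr 0.0001
    loader = DataLoader(train_set, batch_size=128, shuffle=True, num_workers=4)

    def run_epochs(n_epochs, criterion):
        for _ in range(n_epochs):
            for images, labels in loader:
                images, labels = images.to(device), labels.to(device)
                optimizer.zero_grad()
                loss = criterion(model(images), labels)
                loss.backward()
                optimizer.step()

    run_epochs(90, F.cross_entropy)  # stage 1: feature extractor with a softmax layer
    run_epochs(10, hedl_loss)        # stage 2: training in the HEDL framework
    return model
```

Splitting the 100 epochs into a 90-epoch softmax warm-up and a short 10-epoch evidential stage mirrors the paper's statement that HEDL adds no hyperparameters beyond this schedule and W_prior = 1.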