R-EDL: Relaxing Nonessential Settings of Evidential Deep Learning
Authors: Mengyuan Chen, Junyu Gao, Changsheng Xu
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments and SOTA performances demonstrate the effectiveness of our method. Source codes are provided in Appendix E. ... Extensive experiments on multiple benchmarks for uncertainty estimation tasks, including confidence estimation and out-of-distribution detection, which comprehensively demonstrate the effectiveness of our proposed R-EDL with remarkable performances under the classical, few-shot, noisy, and video-modality settings. |
| Researcher Affiliation | Academia | Mengyuan Chen (1,2), Junyu Gao (1,2), Changsheng Xu (1,2,3). 1: State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), Institute of Automation, Chinese Academy of Sciences (CASIA); 2: School of Artificial Intelligence, University of Chinese Academy of Sciences (UCAS); 3: Peng Cheng Laboratory, Shenzhen, China |
| Pseudocode | No | The paper describes mathematical derivations and steps but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Source codes are provided in Appendix E. ... Our code is available at https://github.com/MengyuanChen21/ICLR2024-REDL. |
| Open Datasets | Yes | The MNIST (LeCun, 1998) database consists of handwritten digits ranging from 0 to 9. ... CIFAR-10 (Krizhevsky et al., 2009) comprises 60,000 32x32 color images distributed across 10 distinct classes... mini-ImageNet (Vinyals et al., 2016) is designed for few-shot learning evaluation. ... UCF-101 (Soomro et al., 2012) is an action recognition dataset of realistic action videos, collected from YouTube. |
| Dataset Splits | Yes | The MNIST (LeCun, 1998) database... We use the proportion of [0.8, 0.2] to partition the training samples into training and validation sets. ... CIFAR-10 (Krizhevsky et al., 2009)... We partition the training images into training and validation sets using a split ratio of [0.95, 0.05]. |
| Hardware Specification | Yes | All experiments are conducted on NVIDIA RTX 3090 GPUs. |
| Software Dependencies | Yes | Our model is implemented with Python 3.8 and PyTorch 1.12. |
| Experiment Setup | Yes | The Adam optimizer is employed with a learning rate of 1e-3, decaying by 0.1 every 15 epochs for MNIST, and a learning rate of 1e-4 for CIFAR-10. The hyper-parameter λ is set to 0.1... The batch size is set to 64, and the maximum training epoch is set to 60 and 200 for MNIST and CIFAR-10, respectively. ... The LBFGS optimizer is employed with the default learning rate 1.0 for 100 epochs. |
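
For concreteness, here is a minimal PyTorch sketch of the train/validation partitions quoted in the Dataset Splits row. It is an illustration only: the seed, transforms, and splitting mechanism are assumptions rather than details from the authors' released code, which should be consulted for the exact procedure.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# Hypothetical reproduction of the quoted split ratios; the authors'
# released code may split differently (e.g., different seed or ordering).
to_tensor = transforms.ToTensor()
mnist = datasets.MNIST("./data", train=True, download=True, transform=to_tensor)
cifar10 = datasets.CIFAR10("./data", train=True, download=True, transform=to_tensor)

gen = torch.Generator().manual_seed(0)  # seed is an assumption

# MNIST: [0.8, 0.2] train/validation partition of the training samples.
n = len(mnist)
mnist_train, mnist_val = random_split(
    mnist, [int(0.8 * n), n - int(0.8 * n)], generator=gen)

# CIFAR-10: [0.95, 0.05] train/validation partition of the training images.
m = len(cifar10)
cifar_train, cifar_val = random_split(
    cifar10, [int(0.95 * m), m - int(0.95 * m)], generator=gen)
```

Integer lengths are used rather than fractional ones so the sketch stays compatible with the quoted PyTorch 1.12.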
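Similarly, the optimizer settings quoted in the Experiment Setup row translate directly into PyTorch. The sketch below assumes a placeholder model and shows only the MNIST configuration (Adam, lr 1e-3, decayed by 0.1 every 15 epochs, batch size 64, 60 epochs); CIFAR-10 would use lr 1e-4 and 200 epochs, and the hyper-parameter λ = 0.1 enters the R-EDL loss, which is not reproduced here.

```python
import torch
from torch import nn, optim
from torch.optim.lr_scheduler import StepLR

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # placeholder network

# MNIST setting from the quote: Adam, lr 1e-3, decayed by 0.1 every 15 epochs.
optimizer = optim.Adam(model.parameters(), lr=1e-3)
scheduler = StepLR(optimizer, step_size=15, gamma=0.1)

BATCH_SIZE = 64   # quoted batch size
MAX_EPOCHS = 60   # quoted maximum epochs for MNIST (200 for CIFAR-10)
LAMBDA = 0.1      # quoted hyper-parameter λ; its role in the loss is not shown here

for epoch in range(MAX_EPOCHS):
    # training iterations over a DataLoader(batch_size=BATCH_SIZE) go here
    optimizer.step()   # placeholder step so the scheduler sees an optimizer step
    scheduler.step()   # applies the 0.1 decay every 15 epochs

# The quoted LBFGS setting (default lr 1.0, run for 100 epochs) would be built as:
lbfgs = optim.LBFGS(model.parameters(), lr=1.0)
```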