Learn to Accumulate Evidence from All Training Samples: Theory and Practice

Authors: Deep Shankar Pandey, Qi Yu

ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments over many challenging real-world datasets and settings confirm our theoretical findings and demonstrate the effectiveness of our proposed approach.
Researcher Affiliation | Academia | Deep Pandey, Qi Yu (Rochester Institute of Technology). Correspondence to: Qi Yu <qi.yu@rit.edu>.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | The source code for the experiments carried out in this work is attached in the supplementary materials and is available at the link: https://github.com/pandeydeep9/EvidentialResearch2023
Open Datasets | Yes | We consider the standard supervised classification problem with MNIST (LeCun, 1998), Cifar10, and Cifar100 datasets (Krizhevsky et al., 2009), and few-shot classification with mini-ImageNet dataset (Vinyals et al., 2016).
Dataset Splits | No | The paper uses datasets such as MNIST, Cifar10, and Cifar100, which have standard predefined splits, but it does not explicitly provide details (percentages, sample counts, or citations to predefined splits) for the validation-set partitioning.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | No | The paper mentions the "Adam optimizer (Kingma & Ba, 2014)" but does not specify version numbers for any software libraries or dependencies.
Experiment Setup | Yes | MNIST models were trained with learning rate of 0.0001 and Adam optimizer (Kingma & Ba, 2014), and all remaining models were trained with learning rate of 0.1 and Stochastic Gradient Descent optimizer with momentum. ...MNIST models were trained...for 50 epochs, and Cifar10/Cifar100 models were trained on Resnet-18 based classifier (He et al., 2016) for 200 epochs.
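
For reference, the experiment-setup details quoted above can be summarized in a short PyTorch sketch. This is a minimal illustration under stated assumptions, not the authors' released code: the MNIST backbone, momentum value, batch size, data augmentation, and any learning-rate schedule are not reported in the row above and are placeholders here, and the paper's evidential training objective is omitted.

```python
# Minimal sketch of the reported training configuration (not the authors' code).
# Assumptions: momentum 0.9, batch size 128, no LR schedule, small MLP for MNIST.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Epoch counts reported in the Experiment Setup row.
EPOCHS = {"mnist": 50, "cifar10": 200, "cifar100": 200}


def build_model(dataset: str) -> nn.Module:
    if dataset == "mnist":
        # The MNIST backbone is not specified in the paper; a small MLP is a placeholder.
        return nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256),
                             nn.ReLU(), nn.Linear(256, 10))
    # Cifar10/Cifar100 models use a ResNet-18 based classifier (He et al., 2016).
    return models.resnet18(num_classes=10 if dataset == "cifar10" else 100)


def build_optimizer(dataset: str, model: nn.Module) -> torch.optim.Optimizer:
    if dataset == "mnist":
        # MNIST: Adam with learning rate 0.0001 (Kingma & Ba, 2014).
        return torch.optim.Adam(model.parameters(), lr=1e-4)
    # All remaining models: SGD with learning rate 0.1 and momentum
    # (the momentum value itself is not given; 0.9 is assumed).
    return torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)


if __name__ == "__main__":
    dataset = "cifar10"
    train_set = datasets.CIFAR10(root="./data", train=True, download=True,
                                 transform=transforms.ToTensor())
    loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
    model = build_model(dataset)
    optimizer = build_optimizer(dataset, model)
    # The training loop and the paper's evidential loss would go here.
    print(f"Training {dataset} for {EPOCHS[dataset]} epochs")
```

The sketch only encodes the optimizer, learning rate, backbone, and epoch choices stated in the table; everything else needed to reproduce the results would have to come from the authors' repository linked above.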