Weakly-Supervised Residual Evidential Learning for Multi-Instance Uncertainty Estimation
Authors: Pei Liu, Luping Ji
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments empirically demonstrate that our MIREL not only could often make existing MIL networks perform better in MIUE, but also could surpass representative UE methods by large margins, especially in instance-level UE tasks. |
| Researcher Affiliation | Academia | 1School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China. Correspondence to: Luping Ji <jiluping@uestc.edu.cn>. |
| Pseudocode | No | The paper describes its methods using prose and mathematical equations but does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our source code is available at https://github.com/liupei101/MIREL. |
| Open Datasets | Yes | Two bag datasets are MNIST-bags (Le Cun, 1998) and CIFAR10-bags (Krizhevsky et al., 2009), following Ilse et al. (2018) to generate bags for MIL. ... One pathology dataset is CAMELYON16 (Bejnordi et al., 2017) for breast cancer metastasis detection. |
| Dataset Splits | Yes | there are 500, 100, and 1000 generated MNIST bags in training, validation, and test set, respectively. ... We generate 6000, 1000 and 2000 CIFAR bags for training, validation, and test, respectively. ... We leave 15% training samples as a validation set. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions software components like 'Adam' optimizer and 'Dropout layers' but does not specify their version numbers or list other key software dependencies with explicit version information. |
| Experiment Setup | Yes | Learning rate, by default, is set to 0.0001 and it decays by a factor of 0.5 when the criterion on validation set does not decrease within 10 epochs. The other default settings are as follows: an epoch number of 200, a batch size of 1 (bag), a gradient accumulation step of 8, and an optimizer of Adam with a weight decay rate of 1 × 10⁻⁵. |
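
For context on the Dataset Splits row, the following is a minimal sketch (not the authors' code) of how MNIST-bags with the stated 500/100/1000 train/validation/test split could be generated. It assumes the Ilse et al. (2018) convention that a bag is positive when it contains the target digit; the bag-size distribution parameters and target digit here are illustrative assumptions.

```python
import numpy as np
import torch
from torchvision import datasets

def make_mnist_bags(images, labels, num_bags, target_digit=9,
                    mean_bag_size=10, std_bag_size=2, seed=0):
    """Group MNIST instances into bags; a bag is labelled positive iff it
    contains at least one instance of `target_digit` (Ilse et al., 2018)."""
    rng = np.random.default_rng(seed)
    bags, bag_labels = [], []
    for _ in range(num_bags):
        size = max(1, int(rng.normal(mean_bag_size, std_bag_size)))
        idx = torch.from_numpy(rng.choice(len(images), size, replace=False))
        bags.append(images[idx])
        bag_labels.append(int((labels[idx] == target_digit).any()))
    return bags, torch.tensor(bag_labels)

mnist = datasets.MNIST(root="./data", train=True, download=True)
x = mnist.data.float().unsqueeze(1) / 255.0   # (60000, 1, 28, 28)
y = mnist.targets
train_bags, train_y = make_mnist_bags(x, y, num_bags=500, seed=0)
val_bags,   val_y   = make_mnist_bags(x, y, num_bags=100, seed=1)
# The 1000 test bags would be drawn from the MNIST test split analogously.
```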
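
The Experiment Setup row maps onto a standard PyTorch training loop. The sketch below only illustrates the reported hyperparameters (learning rate 1e-4, Adam with weight decay 1e-5, batch size of one bag, gradient accumulation over 8 steps, 200 epochs); the mean-pooling model and the `ReduceLROnPlateau` scheduler are assumptions standing in for MIREL's actual architecture and its unspecified decay-on-plateau mechanism. It reuses the bags from the previous sketch.

```python
import torch
import torch.nn as nn

# Placeholder model: a per-instance MLP whose logits are mean-pooled into a bag
# logit. MIREL's residual evidential head is different; this only illustrates
# the optimization settings quoted in the table.
model = nn.Sequential(nn.Flatten(1), nn.Linear(28 * 28, 128), nn.ReLU(),
                      nn.Linear(128, 1))
criterion = nn.BCEWithLogitsLoss()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)
# Halve the learning rate when the validation criterion has not improved
# for 10 epochs (one plausible reading of the described schedule).
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=10)

accum_steps = 8   # gradient accumulation over 8 bags (batch size = 1 bag)
num_epochs = 200

def bag_logit(bag):
    # bag: (num_instances, 1, 28, 28) -> instance logits -> mean-pooled bag logit
    return model(bag).mean(dim=0)

for epoch in range(num_epochs):
    optimizer.zero_grad()
    for step, (bag, label) in enumerate(zip(train_bags, train_y.float())):
        loss = criterion(bag_logit(bag), label.view(1)) / accum_steps
        loss.backward()
        if (step + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
    with torch.no_grad():
        val_loss = sum(criterion(bag_logit(b), y.view(1)).item()
                       for b, y in zip(val_bags, val_y.float()))
    scheduler.step(val_loss)  # drives the decay-on-plateau schedule
```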