A Data-Driven Measure of Relative Uncertainty for Misclassification Detection

Authors: Eduardo Dadalto Câmara Gomes, Marco Romanelli, Georg Pichler, Pablo Piantanida

ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate empirical improvements over multiple image classification tasks, outperforming state-of-the-art misclassification detection methods.
Researcher Affiliation | Academia | Eduardo Dadalto: Laboratoire des signaux et systèmes (L2S), Université Paris-Saclay, CNRS, CentraleSupélec, Gif-sur-Yvette, France; Marco Romanelli: New York University, New York, NY, USA; Georg Pichler: Institute of Telecommunications, TU Wien, 1040 Vienna, Austria; Pablo Piantanida: International Laboratory on Learning Systems (ILLS), Quebec AI Institute (MILA), CNRS, CentraleSupélec, Université Paris-Saclay, Montreal, Canada
Pseudocode | Yes | Algorithm 1: Offline relative uncertainty matrix computation (see the sketch after this table).
Open Source Code | No | The paper does not explicitly state that source code for the described methodology is available, nor does it provide a link to a repository or supplementary materials for code access.
Open Datasets | Yes | Table 1 showcases the misclassification detection performance... trained on different datasets (CIFAR-10, CIFAR-100 (Krizhevsky, 2009)).
Dataset Splits | Yes | We split the test set into two sets: one portion for tuning the detector (a held-out validation set) and the other for evaluating it.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types) used to run the experiments.
Software Dependencies | No | The paper mentions general software components but does not specify the libraries or versions needed to reproduce the experiments.
Experiment Setup | Yes | For our method, we tuned the best lambda parameter (λ), T, and ϵ.
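The Pseudocode, Dataset Splits, and Experiment Setup rows above refer to Algorithm 1 (offline relative uncertainty matrix computation), a test-set split used to tune the detector, and a λ hyperparameter. The sketch below illustrates how such a pipeline could fit together, assuming the relative-uncertainty score takes the quadratic form u_D(p) = p^T D p on softmax outputs p, in the spirit of the paper; the rule used to build D here is a simple illustrative heuristic, not the paper's Algorithm 1, and all function names and data are hypothetical stand-ins.

```python
"""Minimal sketch (not the authors' code) of a misclassification-detection
pipeline: split the test set into a tuning portion and an evaluation portion,
fit a "relative uncertainty" matrix D on the tuning portion, and score samples
with the quadratic form u_D(p) = p^T D p, where p is the softmax output of a
frozen classifier."""
import numpy as np

def fit_uncertainty_matrix(probs, correct, lam=1.0):
    """Fit a symmetric, zero-diagonal matrix D so that u_D(p) = p^T D p tends
    to be large on misclassified samples and small on correct ones.

    probs   : (n, k) softmax outputs on the held-out tuning split
    correct : (n,) boolean, True if the classifier's prediction was right
    lam     : trade-off between the two populations (hypothetical knob)
    """
    mis = probs[~correct]
    cor = probs[correct]
    # Second-moment matrices of the softmax outputs in each population.
    m_mis = mis.T @ mis / max(len(mis), 1)
    m_cor = cor.T @ cor / max(len(cor), 1)
    # Keep directions where the misclassified population dominates
    # (a heuristic stand-in for the paper's optimization over D).
    d = np.maximum(m_mis - lam * m_cor, 0.0)
    np.fill_diagonal(d, 0.0)   # score vanishes on one-hot (fully confident) outputs
    return (d + d.T) / 2.0     # symmetrize

def relative_uncertainty(probs, d):
    """Score each sample with the quadratic form p^T D p."""
    return np.einsum("ni,ij,nj->n", probs, d, probs)

# Usage: split the test set, tune D on one half, evaluate on the other.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=2000)   # stand-in softmax outputs
correct = rng.random(2000) > 0.2                # stand-in correctness labels
idx = rng.permutation(2000)
tune, evaluate = idx[:1000], idx[1000:]
D = fit_uncertainty_matrix(probs[tune], correct[tune], lam=1.0)
scores = relative_uncertainty(probs[evaluate], D)  # higher => more likely misclassified
```

Zeroing the diagonal makes the score vanish on one-hot predictions, which is the behavior one would expect from an uncertainty measure, and the `lam` knob mirrors the λ trade-off mentioned in the Experiment Setup row; both choices are assumptions of this sketch rather than a restatement of Algorithm 1.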