Conformalized Credal Set Predictors

Authors: Alireza Javanmardi, David Stutz, Eyke Hüllermeier

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we focus on experiments on two real-world datasets. Further information on these datasets and details about the learning models can be found in Appendix C. Additional experiments on synthetic data, including an illustrative example showing how the resulting credal sets change as epistemic uncertainty decreases, and experiments on the impact of imprecise first-order data, are provided in Appendix E."
Researcher Affiliation | Academia | Alireza Javanmardi (LMU Munich, MCML Munich, Germany; alireza.javanmardi@ifi.lmu.de); David Stutz (Max Planck Institute for Informatics, Saarbrücken, Germany; david.stutz@mpi-inf.mpg.de); Eyke Hüllermeier (LMU Munich, MCML Munich, Germany; eyke@ifi.lmu.de)
Pseudocode | Yes | "Algorithm 1: Conformal Credal Set Prediction" (a hedged split-conformal sketch follows the table)
Open Source Code | Yes | "All implementations and experiments can be found on our GitHub repository." Code: https://github.com/alireza-javanmardi/conformal-credal-sets
Open Datasets | Yes | "ChaosNLI [31] (License: CC BY-NC 4.0 DEED) is an English Natural Language Inference (NLI) dataset... CIFAR-10H [35] (License: CC BY-NC-SA 4.0 DEED) is a dataset of soft labels that capture human perceptual uncertainty for the 10,000 images of the CIFAR-10 test set [28]."
Dataset Splits | Yes | "To split the data, we randomly select 500 instances for calibration, 500 for testing, and the remaining for training." (see the split sketch after the table)
Hardware Specification | Yes | "For all experiments, we used an Intel(R) Core(TM) i7-11800H CPU with 16.0 GB of RAM."
Software Dependencies | No | The paper mentions the Hugging Face transformers library, the Adam optimizer, and pre-trained models, but it does not provide version numbers for these libraries or for other key dependencies required for reproduction.
Experiment Setup | Yes | "As for the learner, we employ a deep neural network consisting of three hidden layers with 256, 64, and 16 units, utilizing ReLU as the activation function. Prior to the output layer, a dropout layer with a rate of 0.3 is incorporated. The same model architecture serves both first- and second-order predictors, differing only in the activation functions of the output layers. For the first-order predictor, softmax is used, while for the second-order predictor, ReLU is employed. Learning is facilitated using the Adam optimizer with a learning rate of 10^-4, utilizing cross-entropy as the loss function for the first-order predictor and negative log-likelihood for the second-order predictor." (a model sketch follows the table)
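The paper's Algorithm 1 is not reproduced on this page. As a rough illustration only, the sketch below shows the generic split-conformal recipe applied to distribution-valued labels: the choice of nonconformity score (total variation distance) and the `model.predict` interface are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def tv_distance(p, q):
    # Total variation distance between discrete distributions (rows).
    return 0.5 * np.abs(p - q).sum(axis=-1)

def conformal_credal_set(model, X_cal, P_cal, x_test, alpha=0.1):
    """Split conformal prediction over the probability simplex (sketch).

    Returns a predicted distribution p_hat and a radius r; the credal set
    {q : TV(q, p_hat) <= r} then covers the true label distribution with
    probability >= 1 - alpha (marginally, under exchangeability).
    """
    # Nonconformity scores on the calibration set: distance between the
    # model's predicted distribution and the observed soft label.
    scores = tv_distance(model.predict(X_cal), P_cal)  # model.predict is a hypothetical interface
    n = len(scores)
    # Conformal quantile with the usual finite-sample correction.
    level = np.ceil((n + 1) * (1 - alpha)) / n
    r = np.quantile(scores, min(level, 1.0), method="higher")
    p_hat = model.predict(x_test[None])[0]
    return p_hat, r
```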
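The reported split (500 instances for calibration, 500 for testing, the remainder for training) translates directly into code. The snippet below is a minimal sketch; the function name and seed are chosen for illustration.

```python
import numpy as np

def split_indices(n, n_cal=500, n_test=500, seed=0):
    # Shuffle all indices, carve out calibration and test sets;
    # everything remaining is used for training.
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n)
    cal = perm[:n_cal]
    test = perm[n_cal:n_cal + n_test]
    train = perm[n_cal + n_test:]
    return train, cal, test
```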
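The described architecture maps naturally onto a small PyTorch model. Below is a minimal sketch assuming PyTorch; the input and output dimensions are illustrative placeholders, since the true input dimension depends on the pre-trained feature extractor used.

```python
import torch
import torch.nn as nn

def make_predictor(in_dim, n_classes, order="first"):
    """Shared architecture for first- and second-order predictors, per the
    paper's description: hidden layers of 256, 64, and 16 ReLU units and a
    dropout layer (rate 0.3) before the output layer.
    """
    layers = [
        nn.Linear(in_dim, 256), nn.ReLU(),
        nn.Linear(256, 64), nn.ReLU(),
        nn.Linear(64, 16), nn.ReLU(),
        nn.Dropout(p=0.3),
        nn.Linear(16, n_classes),
    ]
    # First-order predictor outputs a distribution (softmax); the second-order
    # predictor uses a ReLU output, yielding non-negative values.
    layers.append(nn.Softmax(dim=-1) if order == "first" else nn.ReLU())
    return nn.Sequential(*layers)

# Illustrative dimensions only (e.g., a 768-dim embedding, 3 NLI classes).
model = make_predictor(in_dim=768, n_classes=3, order="first")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```

One practical note on this sketch: because softmax is part of the model here, a cross-entropy loss would be computed from the output probabilities (e.g., via their logarithm) rather than with PyTorch's logits-based nn.CrossEntropyLoss.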