Towards Modeling Uncertainties of Self-Explaining Neural Networks via Conformal Prediction

Authors: Wei Qian, Chenxu Zhao, Yangyi Li, Fenglong Ma, Chao Zhang, Mengdi Huai

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we conduct experiments to evaluate the performance of the proposed method (i.e., unSENN). Due to space limitations, more experimental details and results (e.g., experiments on more self-explaining models and running time) can be found in the full version of this paper. Real-world datasets. In experiments, we adopt the following real-world datasets: CIFAR-100 Super-class (Fischer et al. 2019) and MNIST (Deng 2012)."
Researcher Affiliation | Academia | Wei Qian*1, Chenxu Zhao*1, Yangyi Li*1, Fenglong Ma2, Chao Zhang3, Mengdi Huai1 — 1Iowa State University, 2Pennsylvania State University, 3Georgia Institute of Technology; {wqi, cxzhao, liyangyi, mdhuai}@iastate.edu, fenglong@psu.edu, chaozhang@gatech.edu
Pseudocode | Yes | Algorithm 1: Uncertainty quantification for self-explaining neural networks
Open Source Code | No | The paper does not provide an explicit statement or link indicating open-source code availability for the described methodology.
Open Datasets | Yes | "Real-world datasets. In experiments, we adopt the following real-world datasets: CIFAR-100 Super-class (Fischer et al. 2019) and MNIST (Deng 2012)."
Dataset Splits | Yes | "To train a self-explaining network, we first split the available dataset D = {(x_i, c_i, y_i)}_{i=1}^N into a training set D_tra and a calibration set D_cal, where D_tra ∩ D_cal = ∅ and D_tra ∪ D_cal = D. [...] For the calibration set, we randomly hold out 10% of the original available dataset to compute the non-conformity scores."
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processors, or memory) used to run the experiments.
Software Dependencies | No | The paper mentions models such as ResNet-50, CNN, and MLP, but it does not specify any software dependencies with version numbers (e.g., "Python 3.8, PyTorch 1.9").
Experiment Setup | No | The paper specifies the models used (ResNet-50, CNN, MLP), notes that 10% of the data is held out for calibration, and states that experiments are run 10 times. However, it does not report specific hyperparameter values such as learning rate, batch size, or optimizer settings needed to fully reproduce the experimental setup.
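The calibration procedure quoted in the Dataset Splits row follows the standard split conformal recipe: hold out a calibration set, compute non-conformity scores on it, and take a finite-sample-corrected quantile as the prediction-set threshold. A minimal sketch of that recipe is below; it is illustrative only, not the paper's code, and the function name, random scores, and 10% split fraction are assumptions for demonstration:

```python
import numpy as np

def split_conformal_threshold(cal_scores, alpha=0.1):
    """Return the split-conformal quantile threshold q-hat computed
    from calibration non-conformity scores, using the standard
    finite-sample correction ceil((n+1)(1-alpha))/n."""
    n = len(cal_scores)
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    # "higher" picks the next-largest observed score, keeping the
    # coverage guarantee conservative.
    return np.quantile(cal_scores, q_level, method="higher")

# Illustrative usage: hold out 10% of the data for calibration
# (mirroring the split described in the paper's quoted text).
rng = np.random.default_rng(0)
all_scores = rng.random(1000)            # stand-in non-conformity scores
n_cal = int(0.1 * len(all_scores))       # 10% calibration hold-out
cal_scores = all_scores[:n_cal]
qhat = split_conformal_threshold(cal_scores, alpha=0.1)
```

At test time, every candidate output whose non-conformity score is at most `qhat` is included in the prediction set, which yields approximately 1 − α marginal coverage under exchangeability.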