Explainability as statistical inference
Authors: Hugo Henri Joseph Senetaire, Damien Garreau, Jes Frellsen, Pierre-Alexandre Mattei
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We propose new datasets with ground-truth selection which allow for the evaluation of the feature importance maps and show experimentally that multiple imputation provides more reasonable interpretations. |
| Researcher Affiliation | Academia | 1Department of Applied Mathematics and Computer Science, Technical University of Denmark, Denmark 2Université Côte d'Azur, Inria, Maasai, LJAD, CNRS, Nice, France. Correspondence to: Hugo Henri Joseph Senetaire <hhjs@dtu.dk>. |
| Pseudocode | No | No pseudocode or clearly labeled algorithm block found. |
| Open Source Code | No | No explicit statement or link providing concrete access to the source code for the methodology described in this paper. |
| Open Datasets | Yes | For each dataset, we generate 5 different datasets containing each 10,000 train samples and 10,000 test samples. |
| Dataset Splits | Yes | The split between train and validation is done randomly with proportions 80%/20%. Hence, the train dataset of the switching-panels input contains 48,000 images, the validation dataset contains 12,000 images, and the test dataset 10,000 images. |
| Hardware Specification | No | No specific hardware details (e.g., GPU models, CPU types, memory) used for running experiments are provided. |
| Software Dependencies | No | The paper mentions software components like "Adam", "U-Net", "Quickshift", "SHAP", "FASTSHAP", "Sklearn", but does not provide specific version numbers for these or other key software dependencies. |
| Experiment Setup | Yes | We trained all the methods for 1000 epochs using Adam for optimisation with a learning rate of 10⁻⁴ and weight decay of 10⁻³, with a batch size of 1000. |
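The split and setup rows above pin down concrete numbers: a random 80%/20% train/validation split (48,000/12,000 images from a 60,000-image pool, plus a 10,000-image test set) and Adam with a learning rate of 10⁻⁴, weight decay of 10⁻³, batch size 1000, for 1000 epochs. A minimal sketch of how those numbers fit together is below; `train_val_split` and the seed are illustrative names of ours, not from the paper, and the paper does not state how its random split was implemented.

```python
import numpy as np

# Hyperparameters as reported in the Experiment Setup row.
EPOCHS = 1000
LEARNING_RATE = 1e-4   # 10^-4
WEIGHT_DECAY = 1e-3    # 10^-3
BATCH_SIZE = 1000

def train_val_split(n_samples, val_fraction=0.2, seed=0):
    """Randomly split sample indices with the reported 80/20 proportions.

    Illustrative only: the paper specifies the proportions, not the mechanism.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_val = int(round(n_samples * val_fraction))
    return idx[n_val:], idx[:n_val]

# 60,000-image pool -> 48,000 train / 12,000 validation, as stated in the table.
train_idx, val_idx = train_val_split(60_000)
print(len(train_idx), len(val_idx))  # 48000 12000
```

With a PyTorch-style optimizer these hyperparameters would map onto, e.g., `Adam(params, lr=1e-4, weight_decay=1e-3)`, though the paper does not name the framework used.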