On the Tractability of SHAP Explanations under Markovian Distributions

Authors: Reda Marzouk, Colin De La Higuera

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this article, we investigate the computational complexity of the SHAP score by relaxing this assumption and introducing a Markovian perspective. We show that, under the Markovian assumption, computing the SHAP score for the class of Weighted automata, Disjoint DNFs and Decision Trees can be performed in polynomial time, offering a first positive complexity result for the problem of SHAP score computation that transcends the limitations of the feature independence assumption.
Researcher Affiliation | Academia | LS2N, Université de Nantes, France. Correspondence to: Reda Marzouk <mohamed-reda.marzouk@univ-nantes.fr>.
Pseudocode | No | The paper describes algorithmic steps in prose (e.g., in Appendix C for constructing A1, A2, A3) but does not present them as formal pseudocode or in an algorithm block.
Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the methodology described.
Open Datasets | No | This is a theoretical paper on computational complexity and tractability proofs. It does not involve experimental training on datasets, so no dataset information is provided.
Dataset Splits | No | This is a theoretical paper on computational complexity and tractability proofs, not an empirical study with data. Therefore, no dataset splits for training, validation, or testing are mentioned.
Hardware Specification | No | This is a theoretical paper focused on proofs of computational tractability. It does not mention any specific hardware used for running experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers.
Experiment Setup | No | This is a theoretical paper on computational complexity proofs, not an empirical study. Therefore, no experimental setup details like hyperparameters or training settings are provided.
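For intuition about the quantity whose tractability the paper studies, the following is a hypothetical brute-force sketch (not the paper's polynomial-time algorithm): it computes exact conditional-expectation SHAP scores for a toy disjoint-DNF model over binary features distributed as a Markov chain. All numbers (initial distribution, transition matrix, model) are illustrative assumptions.

```python
from itertools import product, combinations
from math import factorial

n = 3  # number of binary features, assumed to form a Markov chain

# Hypothetical Markov chain: P(X1) and P(X_{t+1} | X_t).
init = {0: 0.6, 1: 0.4}
trans = {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.8}

def joint(z):
    """Joint probability of an assignment z under the Markov chain."""
    p = init[z[0]]
    for a, b in zip(z, z[1:]):
        p *= trans[(a, b)]
    return p

def f(z):
    """Toy model to explain: a disjoint-DNF-style Boolean function."""
    return 1.0 if (z[0] == 1 and z[1] == 1) or z[2] == 1 else 0.0

def v(S, x):
    """Value function v(S) = E[f(X) | X_S = x_S], by full enumeration."""
    num = den = 0.0
    for z in product([0, 1], repeat=n):
        if all(z[i] == x[i] for i in S):
            p = joint(z)
            num += p * f(z)
            den += p
    return num / den

def shap(i, x):
    """Exact Shapley value of feature i at instance x (exponential time)."""
    others = [j for j in range(n) if j != i]
    total = 0.0
    for k in range(n):
        for S in combinations(others, k):
            w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += w * (v(set(S) | {i}, x) - v(set(S), x))
    return total

x = (1, 1, 0)
vals = [shap(i, x) for i in range(n)]
```

The enumeration over all 2^n assignments and all feature subsets is exponential; the paper's contribution is precisely that, for Markovian distributions and model classes such as decision trees, disjoint DNFs, and weighted automata, this quantity can instead be computed in polynomial time.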