Local vs. Global Interpretability: A Computational Complexity Perspective

Authors: Shahaf Bassan, Guy Amir, Guy Katz

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "We propose a framework for bridging this gap, by using computational complexity theory to assess local and global perspectives of interpreting ML models. We begin by proposing proofs for two novel insights that are essential for our analysis: (i) a duality between local and global forms of explanations; and (ii) the inherent uniqueness of certain global explanation forms."
Researcher Affiliation | Academia | "The Hebrew University of Jerusalem, Jerusalem, Israel. Correspondence to: Shahaf Bassan <shahaf.bassan@mail.huji.ac.il>."
Pseudocode | Yes | "Algorithm 1 Local Subset Minimal Sufficient Reason" (a hedged sketch of this greedy scheme appears after the table)
Open Source Code | No | The paper does not mention releasing any source code for the methodology described.
Open Datasets | No | The paper is theoretical and does not involve empirical studies, so no datasets are used or described as publicly available.
Dataset Splits | No | The paper is theoretical and does not report experiments, so no training, validation, or test splits are described.
Hardware Specification | No | The paper is theoretical and does not describe any experimental setup or the hardware used for its analysis.
Software Dependencies | No | The paper is theoretical and does not report experiments, so no software dependencies with version numbers are specified.
Experiment Setup | No | The paper is theoretical and does not detail an experimental setup, hyperparameters, or training configurations.
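The Pseudocode row cites the paper's Algorithm 1, "Local Subset Minimal Sufficient Reason". Since the paper releases no code, the snippet below is only a minimal sketch of the standard greedy scheme such an algorithm follows: start from the full feature set and drop any feature whose removal keeps the remaining subset sufficient, where "sufficient" means that fixing those features to their values in the input forces the model's prediction under every completion of the remaining features. The function names, the exhaustive is_sufficient check, and the toy Boolean model are illustrative assumptions, not the paper's implementation; the paper treats the sufficiency check as a verification query whose complexity depends on the model class.

```python
from itertools import product

def is_sufficient(model, x, subset, domains):
    """Return True if fixing the features in `subset` to their values in x
    forces model(x)'s prediction for every completion of the free features.
    Exhaustive check: feasible only for small discrete domains; in general
    this is the verification query analyzed in the paper."""
    target = model(x)
    free = [i for i in range(len(x)) if i not in subset]
    for completion in product(*(domains[i] for i in free)):
        z = list(x)
        for i, v in zip(free, completion):
            z[i] = v
        if model(tuple(z)) != target:
            return False
    return True

def subset_minimal_sufficient_reason(model, x, domains):
    """Greedy scheme in the spirit of the cited Algorithm 1: start from all
    features and drop each one whose removal preserves sufficiency."""
    subset = set(range(len(x)))
    for i in range(len(x)):
        candidate = subset - {i}
        if is_sufficient(model, x, candidate, domains):
            subset = candidate
    return subset

# Toy usage: f(z) = z0 AND z1 over Boolean features, input x = (1, 1, 0).
if __name__ == "__main__":
    model = lambda z: int(z[0] and z[1])
    x = (1, 1, 0)
    domains = [(0, 1)] * 3
    print(subset_minimal_sufficient_reason(model, x, domains))  # {0, 1}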