Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Joint Shapley values: a measure of joint feature importance

Authors: Chris Harris, Richard Pymar, Colin Rowat

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "4 EXPERIMENTS; 4.1 GAME THEORETICAL; 4.2 THE AI/ML ATTRIBUTION PROBLEM; 4.2.1 SIMULATED DATA; 4.2.2 BOSTON HOUSING DATA; 4.2.3 MOVIE REVIEWS; Table 1: Joint and interaction measures in n = 3 game theory examples; Table 2: Uniform random variables, k = 3; Table 3: Bernoulli(0.5) random variables, k = 3; Table 4: Joint Shapley values for the Boston dataset; Table 5: Examples of local joint Shapley values in the Pang & Lee (2005) movie reviews"
Researcher Affiliation | Collaboration | Chris Harris (Raptor Financial Technologies, Tokyo, Japan); Richard Pymar (Economics, Mathematics & Statistics, Birkbeck College, University of London, UK); Colin Rowat (Economics, University of Birmingham, UK)
Pseudocode | No | The paper provides mathematical derivations and proofs, but no pseudocode or algorithm blocks are present.
Open Source Code | Yes | "Our proofs and source code are available in the accompanying supplemental material; all data are taken from the public domain."
Open Datasets | Yes | "For comparability, we follow Dhamdhere et al. (2020) by training a random forest on the Boston housing dataset (Harrison & Rubinfeld, 1978)..."; "We train a fully connected neural network (two hidden layers, 16 units per layer, ReLU activations) on the binary movie review classifications in Pang & Lee (2005)."; "all data are taken from the public domain."
Dataset Splits | No | The paper mentions a "test block" for the movie reviews but does not specify validation splits or split ratios for any of the datasets used in its experiments.
Hardware Specification | Yes | "All experiments are run on a single Intel(R) Core(TM) i7-6820HQ CPU."
Software Dependencies | No | The paper states "Training details and tuning parameters are provided in the accompanying code" and mentions training a fully connected neural network, but does not specify the software or library versions used (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | No | "Training details and tuning parameters are provided in the accompanying code. We train a fully connected neural network (two hidden layers, 16 units per layer, ReLU activations)."
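The Experiment Setup row quotes the only architecture details the paper states in prose: a fully connected network with two hidden layers of 16 units each and ReLU activations, trained on binary labels. A minimal sketch of such a network is below; scikit-learn's MLPClassifier and the synthetic placeholder data are assumptions for illustration, not the authors' implementation (their actual training code and tuning parameters are in the paper's supplemental material).

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder data standing in for the Pang & Lee (2005) movie-review
# features and binary labels, which are not reproduced here.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Architecture as described in the paper: two hidden layers,
# 16 units per layer, ReLU activations.
clf = MLPClassifier(
    hidden_layer_sizes=(16, 16),
    activation="relu",
    max_iter=500,
    random_state=0,
)
clf.fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```

The tuning parameters shown (max_iter, random_state) are arbitrary defaults for the sketch; the paper defers all such details to its accompanying code.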