HarsanyiNet: Computing Accurate Shapley Values in a Single Forward Propagation
Authors: Lu Chen, Siyu Lou, Keyan Zhang, Jin Huang, Quanshi Zhang
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4. Experiments |
| Researcher Affiliation | Academia | ¹Shanghai Jiao Tong University, China; ²Eastern Institute for Advanced Study, China. Correspondence to: Quanshi Zhang (Department of Computer Science and Engineering, the John Hopcroft Center, Shanghai Jiao Tong University, China). |
| Pseudocode | No | The paper describes computational steps and operations (Equations 5-7) but does not provide a formal pseudocode block or an algorithm labeled as such. |
| Open Source Code | Yes | 1https://github.com/csluchen/harsanyinet |
| Open Datasets | Yes | We trained the Harsanyi-MLP on three tabular datasets from the UCI machine learning repository (Dua & Graff, 2017), including the Census Income dataset (n = 12), the Yeast dataset (n = 8) and the TV news commercial detection dataset (n = 10)... We trained the Harsanyi-CNN on two image datasets: the MNIST dataset (LeCun & Cortes, 2010) and the CIFAR-10 dataset (Krizhevsky et al., 2009). |
| Dataset Splits | No | The paper describes specific data sampling strategies for certain analyses (e.g., 'randomly sampled 8 image regions', 'randomly sampled n = 12 variables'), but it does not provide explicit training, validation, and test split percentages or sample counts for the main model training on the datasets used. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper refers to methods and models like 'FGSM attack' and 'ResNet-50', but it does not provide specific version numbers for any software dependencies (e.g., programming languages, libraries, frameworks). |
| Experiment Setup | Yes | The Harsanyi-MLP was constructed with 3 cascaded Harsanyi blocks, where each was formulated by following Equations (5)–(7), and each Harsanyi block had 100 neurons. The Harsanyi-CNN was constructed with 10 cascaded Harsanyi blocks upon the feature z⁽⁰⁾, and each Harsanyi block had 512 × 16 × 16 neurons, where 512 is the number of channels. The hyperparameters were set to β = 10 and γ = 100 for the Harsanyi-MLP trained on tabular data, and to β = 1000 and γ = 1 for the Harsanyi-CNN trained on the image data, respectively. |
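For reference, the reported experiment setup can be collected into a small configuration sketch. This is a hedged illustration only: the dict keys (`n_blocks`, `block_shape`, etc.) and the helper function are hypothetical names chosen here, not identifiers from the authors' repository; only the numeric values come from the paper.

```python
# Hypothetical configuration sketch of the setups reported in the paper.
# Key names are illustrative; values (block counts, shapes, beta, gamma)
# are taken from the Experiment Setup row above.

HARSANYI_MLP = {
    "n_blocks": 3,              # 3 cascaded Harsanyi blocks (Eqs. 5-7)
    "neurons_per_block": 100,   # 100 neurons per block
    "beta": 10,
    "gamma": 100,
    "datasets": ["Census Income", "Yeast", "TV news commercial detection"],
}

HARSANYI_CNN = {
    "n_blocks": 10,                 # 10 cascaded Harsanyi blocks on z^(0)
    "block_shape": (512, 16, 16),   # channels x height x width
    "beta": 1000,
    "gamma": 1,
    "datasets": ["MNIST", "CIFAR-10"],
}

def neurons_per_block(cfg):
    """Total neurons in one Harsanyi block for either configuration."""
    if "block_shape" in cfg:
        c, h, w = cfg["block_shape"]
        return c * h * w
    return cfg["neurons_per_block"]
```

Under this sketch, one Harsanyi-CNN block holds 512 × 16 × 16 = 131,072 neurons, versus 100 for the Harsanyi-MLP.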