Marginal Contribution Feature Importance - an Axiomatic Approach for Explaining Data
Authors: Amnon Catav, Boyang Fu, Yazeed Zoabi, Ahuva Libi Weiss Meilik, Noam Shomron, Jason Ernst, Sriram Sankararaman, Ran Gilad-Bachrach
ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We analyze the theoretical properties of this score function and demonstrate its merits empirically. [...] In this section we analyze the performance of MCI empirically and compare it to other methods. |
| Researcher Affiliation | Academia | (1) School of Computer Science, Tel-Aviv University, Tel-Aviv, Israel; (2) Computer Science Department, University of California, Los Angeles, USA; (3) Faculty of Medicine, Tel-Aviv University, Tel-Aviv, Israel; (4) I-Medata AI Center, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel; (5) Department of Computational Medicine, University of California, Los Angeles, USA; (6) Department of Biological Chemistry, University of California, Los Angeles, USA; (7) Department of Human Genetics, University of California, Los Angeles, USA; (8) Department of Biomedical Engineering, Tel-Aviv University, Tel-Aviv, Israel. |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | https://github.com/TAU-MLwell/Marginal-Contribution-Feature-Importance |
| Open Datasets | Yes | We use a gene microarray dataset (BRCA) (Tomczak et al., 2015) [...] different datasets from the UCI repository (Asuncion & Newman, 2007). |
| Dataset Splits | Yes | The dataset consists of 100K examples with a split of 70%/10%/20% for train, validation and test. (See the split sketch below the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running its experiments. |
| Software Dependencies | Yes | We train each model using the Scikit-learn package, with its default hyperparameters (Pedregosa et al., 2011). [...] a gradient boosting trees model trained with LightGBM (version 2.3.0). (See the training sketch below the table.) |
| Experiment Setup | Yes | Models are trained with a batch size of 512 for 1,000 epochs using early stopping when validation accuracy did not improve for 50 consecutive epochs. [...] We train each model using the Scikit-learn package, with its default hyperparameters (Pedregosa et al., 2011). (An early-stopping sketch follows the table.) |
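
The Dataset Splits row reports a 70%/10%/20% train/validation/test partition of 100K examples. Below is a minimal sketch of how such a split could be reproduced with scikit-learn; the synthetic data, variable names, and random seed are illustrative assumptions and are not taken from the authors' released code.

```python
# Hedged sketch: a 70%/10%/20% train/validation/test split.
# The synthetic dataset and seed are placeholders, not the paper's data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=100_000, n_features=20, random_state=0)

# Hold out 20% for testing, then split the remaining 80% so that the
# validation set is 10% of the original data (0.10 / 0.80 = 0.125).
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.20, random_state=0
)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.125, random_state=0
)
```

The Software Dependencies row quotes training with scikit-learn default hyperparameters and a gradient boosting trees model in LightGBM (version 2.3.0). The sketch below shows one way such a setup might look; the choice of `RandomForestClassifier` and the reuse of the split variables from the previous sketch are assumptions for illustration only.

```python
# Hedged sketch: a scikit-learn model with default hyperparameters plus a
# LightGBM gradient boosting trees model. Model choices are illustrative;
# the split variables come from the previous sketch.
from sklearn.ensemble import RandomForestClassifier
import lightgbm as lgb

sk_model = RandomForestClassifier()   # scikit-learn defaults, as quoted
sk_model.fit(X_train, y_train)

gbm = lgb.LGBMClassifier()            # LightGBM (paper cites version 2.3.0)
gbm.fit(X_train, y_train, eval_set=[(X_val, y_val)])

print("RF validation accuracy:", sk_model.score(X_val, y_val))
print("GBM validation accuracy:", gbm.score(X_val, y_val))
```

The Experiment Setup row quotes a batch size of 512, up to 1,000 epochs, and early stopping when validation accuracy does not improve for 50 consecutive epochs. The excerpt does not name the framework, so the Keras model below is a placeholder assumption used only to show how those hyperparameters map onto an early-stopping configuration.

```python
# Hedged sketch: batch size 512, up to 1,000 epochs, early stopping with
# patience 50 on validation accuracy. The architecture is a placeholder.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_accuracy", patience=50, restore_best_weights=True
)
model.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),
    batch_size=512,
    epochs=1000,
    callbacks=[early_stop],
)
```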