Learning Important Features Through Propagating Activation Differences
Authors: Avanti Shrikumar, Peyton Greenside, Anshul Kundaje
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We apply DeepLIFT to models trained on MNIST and simulated genomic data, and show significant advantages over gradient-based methods. (Abstract) |
| Researcher Affiliation | Academia | Stanford University, Stanford, California, USA. |
| Pseudocode | No | The paper describes its rules and method using mathematical formulations (e.g., the summation-to-delta property and the Rescale rule) but does not contain any structured pseudocode or algorithm blocks; an illustrative sketch follows the table. |
| Open Source Code | Yes | Video tutorial: http://goo.gl/qKb7pL, code: http://goo.gl/RM8jvH. |
| Open Datasets | Yes | We train a convolutional neural network on MNIST (LeCun et al., 1999) using Keras (Chollet, 2015) to perform digit classification and obtain 99.2% test-set accuracy. |
| Dataset Splits | No | The paper mentions training on MNIST and evaluating on its 'test-set', but it does not explicitly specify train/validation/test dataset splits, percentages, or sample counts needed to reproduce the partitioning. |
| Hardware Specification | No | The paper does not specify any particular hardware used for the experiments, such as GPU models, CPU models, or memory details. |
| Software Dependencies | No | The paper mentions using 'Keras (Chollet, 2015)' but does not specify a version number for Keras or any other software dependencies. |
| Experiment Setup | Yes | The architecture consists of two convolutional layers, followed by a fully connected layer, followed by the softmax output layer (see Appendix D for full details on model architecture and training). (Section 4.1) Details of the simulation, network architecture and predictive performance are given in Appendix F. (Section 4.2) A hypothetical Keras instantiation of the MNIST architecture follows the table. |
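Although the paper contains no structured pseudocode, its central formulation is compact enough to sketch. The snippet below is a minimal illustration of the Rescale rule for a single elementwise nonlinearity: multipliers are differences-from-reference of the output over differences-from-reference of the input, and contributions (multiplier times input difference) satisfy the summation-to-delta property. The function name `rescale_contributions` and all variable names are ours for illustration, not from the paper or its released code.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def rescale_contributions(x, reference, f=relu, eps=1e-7):
    """Illustrative DeepLIFT Rescale rule for an elementwise y = f(x).

    The multiplier is delta_y / delta_x (differences from the reference
    activation), and the contribution is multiplier * delta_x, so the
    contributions satisfy summation-to-delta: they sum to delta_y.
    """
    delta_x = x - reference
    delta_y = f(x) - f(reference)
    # Guard against division by ~0; the paper handles the delta_x -> 0
    # case via the gradient, which this sketch omits for brevity.
    safe_dx = np.where(np.abs(delta_x) > eps, delta_x, 1.0)
    multiplier = np.where(np.abs(delta_x) > eps, delta_y / safe_dx, 0.0)
    return multiplier * delta_x

x = np.array([-1.0, 0.5, 2.0])
ref = np.zeros_like(x)
contribs = rescale_contributions(x, ref)
# Summation-to-delta holds elementwise for this single-layer example.
assert np.allclose(contribs, relu(x) - relu(ref))
```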
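For the MNIST experiment, the table quotes only the paper's high-level architecture (two convolutional layers, a fully connected layer, and a softmax output), deferring hyperparameters to Appendix D. The sketch below is one plausible modern Keras instantiation of that description; the filter counts, kernel sizes, pooling, and training settings are assumptions rather than the Appendix D values, so the paper's 99.2% test-set accuracy should not be expected to reproduce exactly.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# MNIST ships with a fixed 60k/10k train/test split; the paper does not
# describe any further validation partitioning (see the table above).
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train[..., np.newaxis].astype("float32") / 255.0
x_test = x_test[..., np.newaxis].astype("float32") / 255.0

# Two conv layers -> fully connected layer -> softmax, per Section 4.1.
# Layer widths and kernel sizes are illustrative guesses, not the
# paper's Appendix D configuration.
model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=128, validation_split=0.1)
print(model.evaluate(x_test, y_test))
```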