Streaming Weak Submodularity: Interpreting Neural Networks on the Fly

Authors: Ethan Elenberg, Alexandros G. Dimakis, Moran Feldman, Amin Karbasi

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | An experimental evaluation of our algorithm in two applications: nonlinear sparse regression using pairwise products of features and interpretability of black-box neural network classifiers.
Researcher Affiliation | Academia | Ethan R. Elenberg, Department of Electrical and Computer Engineering, The University of Texas at Austin (elenberg@utexas.edu); Alexandros G. Dimakis, Department of Electrical and Computer Engineering, The University of Texas at Austin (dimakis@austin.utexas.edu); Moran Feldman, Department of Mathematics and Computer Science, Open University of Israel (moranfe@openu.ac.il); Amin Karbasi, Department of Electrical Engineering and Department of Computer Science, Yale University (amin.karbasi@yale.edu)
Pseudocode | Yes | Algorithm 1 THRESHOLD GREEDY(f, k, ε) (see the first sketch after this table)
Open Source Code | Yes | Code for these experiments is available at https://github.com/eelenberg/streak.
Open Datasets | Yes | In this experiment, a sparse logistic regression is fit on 2000 training and 2000 test observations from the Phishing dataset [Lichman, 2013]. (See the second sketch after this table.)
Dataset Splits | No | The paper mentions '2000 training and 2000 test observations' for the Phishing dataset but does not specify a validation split or any further splitting methodology.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as exact GPU/CPU models or memory amounts.
Software Dependencies | No | The paper mentions Inception V3 and LIME but does not specify versions for these or for any other software components or libraries.
Experiment Setup | Yes | Figure 1(a) shows both the final log likelihood and the generalization accuracy for RANDOMSUBSET, LOCALSEARCH, and our STREAK algorithm for ε = {0.75, 0.1} and k = {20, 40, 80}.
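
The Pseudocode row above refers to the paper's Algorithm 1, THRESHOLD GREEDY(f, k, ε). Below is a minimal Python sketch of the general threshold-greedy pattern that name suggests: accept an element whenever its marginal gain clears a threshold, and relax the threshold geometrically by a factor of (1 − ε). The function signature, threshold schedule, and stopping rule are illustrative assumptions rather than the paper's exact pseudocode; the authors' own implementation is in the linked repository.

```python
# Minimal sketch of a threshold-greedy selection routine. The schedule and
# stopping rule are assumptions for illustration, not the paper's exact
# Algorithm 1; see https://github.com/eelenberg/streak for the authors' code.

def threshold_greedy(f, ground_set, k, eps):
    """Select up to k elements whose marginal gain clears a decaying threshold.

    f          -- set function mapping a list of elements to a float
                  (assumed monotone and weakly submodular)
    ground_set -- list of candidate elements
    k          -- cardinality budget
    eps        -- accuracy parameter controlling the threshold decay
    """
    selected = []
    # Start the acceptance threshold at the largest singleton gain.
    d = max(f([e]) - f([]) for e in ground_set)
    tau = d
    while len(selected) < k and tau > (eps / k) * d:
        for e in ground_set:
            if len(selected) >= k:
                break
            if e in selected:
                continue
            gain = f(selected + [e]) - f(selected)
            if gain >= tau:
                selected.append(e)
        # Lower the acceptance bar by a (1 - eps) factor each round.
        tau *= (1.0 - eps)
    return selected


if __name__ == "__main__":
    # Toy coverage-style objective, purely for demonstration.
    sets = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 2, 3, 4}}
    f = lambda S: len(set().union(*(sets[e] for e in S)))
    print(threshold_greedy(f, list(sets), k=2, eps=0.1))
```

The geometric decay means that only O(log(k/ε)/ε) distinct threshold levels are ever tried, which is the usual reason this pattern adapts well to a streaming setting.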
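
The Research Type and Open Datasets rows describe the first experiment: sparse logistic regression over pairwise products of features, fit on 2000 training and 2000 test observations from the Phishing dataset. The sketch below reproduces only that general setup, assuming scikit-learn and a local LIBSVM-format copy of the data; the file path, the deterministic split, and the L1 penalty (used here as a simple stand-in for sparse feature selection, rather than the paper's STREAK selection) are all illustrative assumptions.

```python
# Sketch of the nonlinear sparse regression setup: pairwise feature products
# plus an L1-penalized logistic regression as a simple sparsity stand-in.
# Dataset path, split, and hyperparameters are placeholders.
import numpy as np
from itertools import combinations
from sklearn.datasets import load_svmlight_file
from sklearn.linear_model import LogisticRegression

# LIBSVM-format Phishing data (placeholder path).
X, y = load_svmlight_file("phishing.txt")
X = X.toarray()

# Augment the original features with all pairwise products.
pairs = list(combinations(range(X.shape[1]), 2))
X_pairs = np.column_stack([X[:, i] * X[:, j] for i, j in pairs])
X_full = np.hstack([X, X_pairs])

# 2000 training and 2000 test observations, as quoted in the table above
# (the paper's exact subsampling procedure is not specified).
X_tr, y_tr = X_full[:2000], y[:2000]
X_te, y_te = X_full[2000:4000], y[2000:4000]

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X_tr, y_tr)
print("nonzero coefficients:", np.count_nonzero(clf.coef_))
print("test accuracy:", clf.score(X_te, y_te))
```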