Representer Point Selection for Explaining Deep Neural Networks
Authors: Chih-Kuan Yeh, Joon Kim, Ian En-Hsu Yen, Pradeep K. Ravikumar
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform a number of experiments with multiple datasets and evaluate our method's performance and compare with that of the influence functions. |
| Researcher Affiliation | Academia | Chih-Kuan Yeh, Joon Sik Kim, Ian E.H. Yen, Pradeep Ravikumar; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213; {cjyeh, joonsikk, eyan, pradeepr}@cs.cmu.edu |
| Pseudocode | No | The paper describes the steps of the algorithm in prose, but does not provide a formal pseudocode block or algorithm listing. |
| Open Source Code | Yes | Source code available at github.com/chihkuanyeh/Representer_Point_Selection. |
| Open Datasets | Yes | We perform a number of experiments with multiple datasets and evaluate our method's performance... on CIFAR-10 dataset [15]... in Animals with Attributes (AwA) dataset [18]. |
| Dataset Splits | No | The paper mentions training data and test data, but does not explicitly describe a separate validation split (percentages or counts) or a cross-validation setup. |
| Hardware Specification | No | The paper does not specify any particular hardware components like CPU models, GPU models, or memory specifications used for the experiments. |
| Software Dependencies | No | The paper mentions using specific models like VGG-16 and ResNet-50, and optimization methods like SGD and LBFGS, but does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | The L2 weight decay is set to 1e-2 for all methods for fair comparison. We first solve (4) with loss L^softmax(Φ(x_i, Θ), Φ(x_i, Θ^given)) for λ = 0.001, and then calculate Φ(x_t, Θ*) = Σ_{i=1}^{n} k(x_t, x_i, α_i) as in (2) for all train and test points. |
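
The Experiment Setup row quotes equation (2) of the paper, in which the pre-softmax prediction for a test point decomposes into per-training-point representer values k(x_t, x_i, α_i) = α_i f_i^T f_t, with α_i = -1/(2λn) · ∂L/∂Φ(x_i, Θ) from the paper's representer theorem. Below is a minimal NumPy sketch of that decomposition; the function and variable names are illustrative assumptions, not code from the authors' linked repository.

```python
import numpy as np

def representer_values(f_train, grad_train, f_test, lam):
    """Sketch of the representer decomposition (eq. (2) of the paper).

    f_train    : (n, d) last-layer features of the n training points
    grad_train : (n, c) gradients dL/dPhi evaluated at each training point
    f_test     : (d,)   last-layer feature of one test point
    lam        : L2 regularization strength (0.001 in the quoted setup)
    """
    n = f_train.shape[0]
    # alpha_i = -1/(2*lam*n) * dL/dPhi(x_i, Theta), per the representer theorem
    alpha = -grad_train / (2.0 * lam * n)      # (n, c)
    # k(x_t, x_i, alpha_i) = alpha_i * <f_i, f_t>, one row per training point
    sims = f_train @ f_test                    # (n,)
    k = alpha * sims[:, None]                  # (n, c)
    # Summing the representer values recovers the pre-softmax prediction
    phi_test = k.sum(axis=0)                   # (c,)
    return k, phi_test
```

Sorting the rows of `k` for a given class then ranks training points by their (positive or negative) contribution to that test prediction, which is how the paper selects representer points.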