Linear Explanations for Individual Neurons
Authors: Tuomas Oikarinen, Tsui-Wei Weng
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "In Section 5, where we evaluate and compare our method and existing automated explanation methods." and Section "5. Experiment Results" |
| Researcher Affiliation | Academia | 1CSE, UC San Diego, CA, USA 2HDSI, UC San Diego, CA, USA. Correspondence to: Tuomas Oikarinen <toikarinen@ucsd.edu>, Tsui-Wei Weng <lweng@ucsd.edu>. |
| Pseudocode | Yes | "Algorithm 1 Greedy search", given as Python pseudo-code (see the greedy-search sketch after the table). |
| Open Source Code | Yes | Our code and results are available at https://github.com/TrustworthyML-Lab/Linear-Explanations. |
| Open Datasets | Yes | ResNet-50 (ImageNet), ResNet-18 (Places365), VGG-16 (CIFAR-100), ViT-B/16 (ImageNet) |
| Dataset Splits | Yes | "Throughout the paper we use a 70-10-20 split to divide our D_probe into train, validation and test set splits to avoid overfitting our explanations." (see the split sketch after the table) |
| Hardware Specification | Yes | Both of our explanation methods take around 4 seconds per explained neuron on a machine with a single Tesla V100 GPU. |
| Software Dependencies | No | No version numbers are given for key software components such as the programming language or deep learning framework. The paper mentions the GLM-Saga package and SigLIP, but without versions. |
| Experiment Setup | Yes | "In our experiments, $(v, r, \epsilon)$ are set to be 10, 10, 0.02 respectively." and "where $R_\eta(w_k) = (1-\eta)\frac{1}{2}\|w_k\|_2^2 + \eta\|w_k\|_1$ and $\eta$ is a hyperparameter, set to 0.99 in our experiments." (see the elastic-net sketch after the table) |
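The table only names Algorithm 1 ("Greedy search"), so here is a minimal sketch of the kind of greedy forward-selection loop it describes: repeatedly add the concept most correlated with the current residual, refit the linear weights, and stop when the relative error improvement falls below a threshold. The function name `greedy_linear_explanation`, the residual-correlation scoring, and the least-squares refit are illustrative assumptions, not a transcription of the paper's Algorithm 1.

```python
import numpy as np

def greedy_linear_explanation(y, X, max_terms=10, eps=0.02):
    """Greedy forward selection of concepts to explain one neuron.

    y: (n,) neuron activations on the probing dataset
    X: (n, c) concept-activation matrix (one column per concept)
    Returns the indices and least-squares weights of the chosen concepts.
    """
    selected = []
    weights = np.zeros(0)
    residual = y.astype(float)
    prev_err = float(np.sum(residual ** 2)) + 1e-12  # guard against all-zero y
    for _ in range(max_terms):
        # Rank unselected concepts by |correlation| with the current residual.
        scores = np.abs(X.T @ residual)
        if selected:
            scores[selected] = -np.inf
        trial = selected + [int(np.argmax(scores))]
        # Refit the linear weights on the enlarged support.
        w, *_ = np.linalg.lstsq(X[:, trial], y, rcond=None)
        err = float(np.sum((y - X[:, trial] @ w) ** 2))
        if (prev_err - err) / prev_err < eps:
            break  # relative gain below the stopping threshold
        selected, weights, prev_err = trial, w, err
        residual = y - X[:, selected] @ weights
    return selected, weights
```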
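The Dataset Splits row states only the 70-10-20 proportions for D_probe. A minimal sketch of such a split follows; the random shuffling and fixed seed are assumptions added for reproducibility of the sketch, as the paper does not specify how the split is drawn.

```python
import numpy as np

def split_dprobe(n_examples, seed=0):
    """Split probing-set indices 70-10-20 into train/val/test.

    The proportions come from the paper; the shuffle and seed
    are assumptions made for this illustration.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_examples)
    n_train = int(0.7 * n_examples)
    n_val = int(0.1 * n_examples)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])
```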
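The regularizer quoted in the Experiment Setup row is the standard elastic-net penalty, the form solved by the GLM-Saga package the paper mentions. Below is a direct transcription of $R_\eta$ with $\eta = 0.99$ as in the paper's experiments; only the helper name `elastic_net_penalty` is an assumption.

```python
import numpy as np

def elastic_net_penalty(w_k, eta=0.99):
    """R_eta(w_k) = (1 - eta) * (1/2) * ||w_k||_2^2 + eta * ||w_k||_1.

    eta = 0.99 follows the paper; weighting the L1 term this heavily
    pushes the learned explanation weights toward sparsity.
    """
    w_k = np.asarray(w_k, dtype=float)
    return (1.0 - eta) * 0.5 * np.dot(w_k, w_k) + eta * np.abs(w_k).sum()
```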