i-Algebra: Towards Interactive Interpretability of Deep Neural Networks
Authors: Xinyang Zhang, Ren Pang, Shouling Ji, Fenglong Ma, Ting Wang (pp. 11691–11698)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We prototype i-Algebra and conduct user studies in a set of representative analysis tasks, including inspecting adversarial inputs, resolving model inconsistency, and cleansing contaminated data, all demonstrating its promising usability. |
| Researcher Affiliation | Academia | Xinyang Zhang,¹ Ren Pang,¹ Shouling Ji,² Fenglong Ma,¹ Ting Wang¹ (¹Pennsylvania State University, ²Zhejiang University) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. It describes operators and a query language but not in a formal pseudocode format. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. There is no mention of a repository link, an explicit code release statement, or code being available in supplementary materials. |
| Open Datasets | Yes | "On CIFAR10, we train two VGG19 models f and f′." [...] "We use ImageNet as the dataset and consider a pre-trained ResNet50 (77.15% top-1 accuracy) as the target DNN." |
| Dataset Splits | No | The paper references datasets such as CIFAR10 but does not specify the train/validation/test splits (percentages or counts) used during model training, which would be needed to reproduce the experiments. |
| Hardware Specification | No | The paper does not describe the hardware used to run its experiments; no GPU/CPU models, processor types, or other machine specifications are given. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., Python version, PyTorch/TensorFlow version, or other library versions) needed to replicate the experiment. |
| Experiment Setup | No | The paper mentions training models (VGG19) and using pre-trained models (ResNet50) but does not provide specific experimental setup details such as hyperparameter values (learning rate, batch size, epochs), optimizer settings, or other training configurations. |