Computing Linear Restrictions of Neural Networks

Authors: Matthew Sotoudeh, Aditya V. Thakur

NeurIPS 2019

Reproducibility assessment (variable: result, followed by the LLM response):

Research Type: Experimental
    "On three different CIFAR10 networks [41], we took each image in the test set and computed the exact IG against a black baseline using Equation 7. We then computed the m-sample approximate IG and the mean relative error between it and the exact IG. As shown in Table 1, the approximate IG has an error of 25–45%."

Researcher Affiliation: Academia
    Matthew Sotoudeh, Department of Computer Science, University of California, Davis, CA 95616 (masotoudeh@ucdavis.edu); Aditya V. Thakur, Department of Computer Science, University of California, Davis, CA 95616 (avthakur@ucdavis.edu)

Pseudocode: No
    The paper describes algorithms (e.g., for ReLU and max-pool layers and their composition) and refers to theorems, but it does not provide structured pseudocode blocks or explicitly labeled algorithms.

Open Source Code: Yes
    "We have made our source code available at https://doi.org/10.5281/zenodo.3520097."

Open Datasets: Yes
    "On three different CIFAR10 networks [41], we took each image in the test set and computed the exact IG against a black baseline using Equation 7." Reference [41]: ETH robustness analyzer for neural networks (ERAN), https://github.com/eth-sri/eran, 2019. Accessed 2019-05-01.

Dataset Splits: No
    The paper mentions a CIFAR10 "test set" and analyzes the ACAS Xu networks, but it does not give the dataset-split details (e.g., percentages or counts for training, validation, and test sets) that would be needed for reproduction.

Hardware Specification: No
    The paper does not specify the hardware (e.g., CPU/GPU models, memory) used to run its experiments.

Software Dependencies: No
    The paper mentions CIFAR10 networks and references ERAN, but it does not list software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions).

Experiment Setup: No
    The paper describes the EXACTLINE methodology and its applications, but it does not report experimental-setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or training configurations for the networks analyzed.
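The technique the table references can be illustrated concretely. The paper's EXACTLINE primitive partitions the segment between two inputs into pieces on which a piecewise-linear network is exactly linear; exact integrated gradients (IG) then follow by summing one constant gradient per piece, and the quoted 25–45% error comes from comparing that exact value against a sampled approximation. Below is a minimal sketch for a single toy ReLU layer; the random weights, the scalar readout `v`, and the midpoint-rule approximation are my assumptions for illustration, not the paper's actual networks or sampling scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))   # toy layer weights (assumption, not the paper's nets)
b = rng.normal(size=8)
v = rng.normal(size=8)        # scalar readout: f(x) = v . relu(Wx + b)

def grad_f(x):
    """Gradient of f at x (valid wherever no pre-activation is exactly zero)."""
    z = W @ x + b
    return W.T @ (v * (z > 0))

def exactline_relu(x1, x2):
    """Knots 0 = t_0 < ... < t_k = 1 such that t -> f(x1 + t (x2 - x1))
    is linear on each [t_i, t_i+1]: one knot per ReLU whose pre-activation
    crosses zero along the segment."""
    z1, z2 = W @ x1 + b, W @ x2 + b
    ts = set()
    for a, c in zip(z1, z2):
        if (a > 0) != (c > 0) and a != c:   # pre-activation changes sign
            t = a / (a - c)                  # solve a + t (c - a) = 0
            if 0.0 < t < 1.0:
                ts.add(t)
    return np.array(sorted({0.0, 1.0} | ts))

def exact_ig(x1, x2):
    """Exact IG along the segment: the gradient is constant on each piece,
    so the path integral reduces to a finite sum."""
    knots = exactline_relu(x1, x2)
    d = x2 - x1
    ig = np.zeros_like(x1)
    for t0, t1 in zip(knots[:-1], knots[1:]):
        mid = x1 + 0.5 * (t0 + t1) * d      # interior point of the piece
        ig += (t1 - t0) * grad_f(mid)
    return d * ig

def approx_ig(x1, x2, m):
    """m-sample (midpoint-rule) approximation of IG."""
    d = x2 - x1
    g = sum(grad_f(x1 + (k + 0.5) / m * d) for k in range(m))
    return d * (g / m)

x1 = np.zeros(4)            # black (all-zero) baseline, as in the quoted experiment
x2 = rng.normal(size=4)     # stand-in for an input image
ig, ig_m = exact_ig(x1, x2), approx_ig(x1, x2, m=3)
rel_err = np.linalg.norm(ig - ig_m) / np.linalg.norm(ig)
```

Here `rel_err` plays the role of the table's mean relative error between exact and approximate IG; a useful sanity check is completeness, which the piecewise-exact computation satisfies up to float error: the entries of `exact_ig(x1, x2)` sum to `f(x2) - f(x1)`.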