Sensitivity Analysis of Deep Neural Networks
Authors: Hai Shu, Hongtu Zhu
AAAI 2019, pp. 4943-4950
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show reasonably good performance of the proposed measure for the popular DNN models ResNet50 and DenseNet121 on the CIFAR10 and MNIST datasets. |
| Researcher Affiliation | Collaboration | Hai Shu, Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA; Hongtu Zhu, AI Labs, Didi Chuxing, Beijing, China, zhuhongtu@didiglobal.com |
| Pseudocode | No | The paper describes mathematical models and computational steps but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statements about making its source code available or provide links to a code repository. |
| Open Datasets | Yes | We conduct experiments on the two benchmark datasets CIFAR10 and MNIST using the two popular DNN models ResNet50 (He et al. 2016) and DenseNet121 (Huang et al. 2017). (An illustrative loading sketch appears after this table.) |
| Dataset Splits | Yes | Originally, there are 50,000 and 60,000 training images for CIFAR10 and MNIST, respectively. As the validation sets, we use a randomly selected 10% of those images, with the same number for each class. (An illustrative split sketch appears after this table.) |
| Hardware Specification | No | The paper does not specify any particular hardware details such as GPU models, CPU types, or memory used for running the experiments. It mentions using deep learning libraries like TensorFlow and PyTorch, which implies computational resources, but no specific specifications are provided. |
| Software Dependencies | No | The paper mentions using 'deep learning libraries like TensorFlow (Abadi et al. 2016) and PyTorch (Paszke et al. 2017)', but it does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | In this section, we investigate the performance of our local influence measure. We address the four tasks stated in the Introduction through the following setups under the three perturbation cases in Section 2.3. |
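The datasets and models quoted in the Open Datasets row could be instantiated as in the following PyTorch sketch. The paper releases no code, so everything here (the torchvision loaders, the `data` download path, and the first-layer adaptation for single-channel MNIST) is an assumption for illustration, not the authors' setup.

```python
import torch
from torchvision import datasets, transforms, models

# Benchmark training sets named in the paper; the root path is an assumption.
cifar10 = datasets.CIFAR10(root="data", train=True, download=True,
                           transform=transforms.ToTensor())
mnist = datasets.MNIST(root="data", train=True, download=True,
                       transform=transforms.ToTensor())

# The two DNN architectures evaluated in the paper, with 10 output classes
# to match CIFAR10/MNIST.
resnet50 = models.resnet50(num_classes=10)
densenet121 = models.densenet121(num_classes=10)

# MNIST images are single-channel, while the stock torchvision models expect
# 3-channel inputs; replacing the first convolution is one common adaptation
# (an assumption here, not something the paper specifies).
resnet50_mnist = models.resnet50(num_classes=10)
resnet50_mnist.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                       padding=3, bias=False)
```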
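The class-balanced validation split described in the Dataset Splits row (10% of the training images, the same number per class) could look like the following minimal NumPy sketch; the function name, seed, and `labels` input are hypothetical, with only the 10% fraction and per-class balance taken from the paper's description.

```python
import numpy as np

def stratified_validation_split(labels, val_fraction=0.1, seed=0):
    """Select a class-balanced validation subset.

    Picks the same number of images per class, totalling
    `val_fraction` of the training set.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    per_class = int(len(labels) * val_fraction / len(classes))
    val_idx = []
    for c in classes:
        idx = np.flatnonzero(labels == c)
        val_idx.append(rng.choice(idx, size=per_class, replace=False))
    val_idx = np.concatenate(val_idx)
    train_idx = np.setdiff1d(np.arange(len(labels)), val_idx)
    return train_idx, val_idx

# Example: CIFAR10 has 50,000 training images in 10 classes, so this
# yields 45,000 training / 5,000 validation images (500 per class).
```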