Abstraction based Output Range Analysis for Neural Networks
Authors: Pavithra Prabhakar, Zahra Rahimi Afzal
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results highlight the trade-off between the computation time and the precision of the computed output range. In this section, we present our experimental analysis using a Python toolbox that implements the abstraction procedure and the reduction of the INN output range computation to MILP solving. |
| Researcher Affiliation | Academia | Pavithra Prabhakar, Zahra Rahimi Afzal, Department of Computer Science, Kansas State University, Manhattan, KS 66506, {pprabhakar,zrahimi}@ksu.edu |
| Pseudocode | No | The paper describes methods through mathematical equations and textual explanations, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states, 'We have implemented our algorithm in a Python toolbox.' However, it does not provide any concrete access information (e.g., repository link, explicit release statement) for this code. |
| Open Datasets | Yes | We consider as a case study the ACAS Xu benchmarks, which are neural networks with 6 hidden layers, each layer consisting of 50 neurons [2]. |
| Dataset Splits | No | The paper mentions using 'ACAS Xu benchmarks' but does not specify any dataset splits (training, validation, or test) for these benchmarks. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running the experiments. |
| Software Dependencies | No | The paper mentions using a 'Python toolbox' and 'Gurobi' for MILP solving, but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | We consider abstractions of the benchmark with different numbers of abstract nodes, namely 2, 4, 8, 16, 32, which are generated randomly. For a fixed number of abstract nodes, we perform 30 different random runs, and measure the average, maximum and minimum time for different parts of the analysis. |
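
The Research Type and Software Dependencies rows above note that the paper reduces output range computation on the abstract interval neural network (INN) to MILP solving with Gurobi. The following is a minimal sketch of such a reduction for a toy ReLU network; the weights, input box, and big-M constant are illustrative assumptions, and the code is not the authors' toolbox.

```python
# Hypothetical sketch: encode the output range of a tiny ReLU network as an
# MILP and solve it with Gurobi (big-M encoding of the ReLU activations).
# The network parameters, input bounds, and big-M value are assumptions.
import numpy as np
import gurobipy as gp
from gurobipy import GRB

W1 = np.array([[1.0, -1.0], [0.5, 2.0]])   # hidden-layer weights (assumed)
b1 = np.array([0.0, -0.5])                 # hidden-layer biases (assumed)
W2 = np.array([[1.0, 1.0]])                # output-layer weights (assumed)
b2 = np.array([0.0])
x_lb, x_ub = [-1.0, -1.0], [1.0, 1.0]      # input box
M = 100.0                                  # big-M constant, assumed large enough

def output_bound(sense):
    m = gp.Model("relu_range")
    m.Params.OutputFlag = 0
    x = m.addVars(2, lb=x_lb, ub=x_ub, name="x")
    pre = m.addVars(2, lb=-GRB.INFINITY, name="pre")   # pre-activations
    post = m.addVars(2, lb=0.0, name="post")           # ReLU outputs
    z = m.addVars(2, vtype=GRB.BINARY, name="z")       # activation indicators
    for j in range(2):
        m.addConstr(pre[j] == W1[j, 0] * x[0] + W1[j, 1] * x[1] + b1[j])
        # Big-M encoding of post[j] = max(pre[j], 0)
        m.addConstr(post[j] >= pre[j])
        m.addConstr(post[j] <= pre[j] + M * (1 - z[j]))
        m.addConstr(post[j] <= M * z[j])
    out = W2[0, 0] * post[0] + W2[0, 1] * post[1] + b2[0]
    m.setObjective(out, GRB.MAXIMIZE if sense == "max" else GRB.MINIMIZE)
    m.optimize()
    return m.ObjVal

print("output range:", (output_bound("min"), output_bound("max")))
```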
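
The Experiment Setup row describes generating random abstractions with 2, 4, 8, 16, and 32 abstract nodes and timing 30 random runs per size. A hypothetical harness for that protocol might look like the sketch below; `build_random_abstraction` and `compute_output_range` are assumed placeholder names standing in for the paper's unreleased toolbox, not its actual API.

```python
# Hypothetical timing harness for the described protocol: for each abstraction
# size, perform 30 randomly seeded runs and report average/max/min analysis time.
import random
import statistics
import time

ABSTRACT_NODE_COUNTS = [2, 4, 8, 16, 32]
NUM_RUNS = 30

def build_random_abstraction(network, num_abstract_nodes, rng):
    # Placeholder: randomly partition concrete neurons into abstract nodes.
    return {"abstract_nodes": num_abstract_nodes, "partition_seed": rng.random()}

def compute_output_range(abstract_network):
    # Placeholder: reduce the INN output range query to MILP and solve it.
    return (0.0, 0.0)

def benchmark(network):
    for k in ABSTRACT_NODE_COUNTS:
        times = []
        for seed in range(NUM_RUNS):
            rng = random.Random(seed)
            start = time.perf_counter()
            inn = build_random_abstraction(network, k, rng)
            compute_output_range(inn)
            times.append(time.perf_counter() - start)
        print(f"{k:>2} abstract nodes: avg={statistics.mean(times):.4f}s "
              f"max={max(times):.4f}s min={min(times):.4f}s")

if __name__ == "__main__":
    benchmark(network=None)  # a real network object would be passed here
```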