Neural Network Branching for Neural Network Verification

Authors: Jingyue Lu, M. Pawan Kumar

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirically, our framework achieves roughly 50% reduction in both the number of branches and the time required for verification on various convolutional networks when compared to the best available hand-designed branching strategy."
Researcher Affiliation | Academia | Jingyue Lu, University of Oxford, jingyue.lu@spc.ox.ac.uk; M. Pawan Kumar, University of Oxford, pawan@robots.ox.ac.uk
Pseudocode | Yes | "Algorithm 1 Branch and Bound"
Open Source Code | Yes | "Code for all experiments is available at https://github.com/oval-group/GNN_branching."
Open Datasets | Yes | "We adopt a similar network structure but using a more challenging dataset, namely CIFAR-10, for an increased difficulty level."
Dataset Splits | Yes | "We use 430 properties to generate 17958 training samples and the rest of properties to generate 5923 validation samples."
Hardware Specification | No | The paper states "We ran all verification experiments in parallel on 16 CPU cores" but does not specify particular CPU models (e.g., Intel Xeon, AMD Ryzen) or GPU models.
Software Dependencies | No | The paper mentions using "Gurobi" and the "Adam optimizer" but does not provide version numbers for these or any other software dependencies.
Experiment Setup | Yes | "We compute intermediate bounds using linear bounds relaxations... For the output lower bound, we use Planet relaxation... Adam optimizer with weight decay rate λ = 1e-4 and learning rate 1e-4... The batch size is set to 2... The threshold is set to be 0.2... γ = 1 and t = 0.1 in the loss function loss_online."
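The "Pseudocode" row above refers to the paper's Algorithm 1, a branch-and-bound loop for verification. The following is a minimal generic sketch of that style of algorithm, not the authors' implementation: `compute_bounds` and `split_domain` are hypothetical callbacks standing in for the paper's bounding (e.g. Planet relaxation) and branching (e.g. the GNN strategy) components.

```python
import heapq

def branch_and_bound(root_domain, compute_bounds, split_domain, eps=1e-6):
    """Generic branch-and-bound verification loop (sketch).

    Tries to prove that the minimum of a property function over the input
    domain is positive (property verified), or to find a subdomain whose
    upper bound is negative (property falsified).
    """
    lb, ub = compute_bounds(root_domain)
    global_ub = ub
    # Min-heap keyed on the lower bound: explore the least-verified
    # subdomain first. The counter is only a tie-breaker.
    heap = [(lb, 0, root_domain)]
    counter = 1
    while heap:
        lb, _, dom = heapq.heappop(heap)
        if lb > 0:
            continue  # property already holds on this subdomain
        if global_ub < 0:
            return "falsified"  # some subdomain attains a negative value
        if global_ub - lb < eps:
            continue  # bounds have converged on this subdomain
        for child in split_domain(dom):  # the branching decision
            c_lb, c_ub = compute_bounds(child)
            global_ub = min(global_ub, c_ub)
            if c_lb <= 0:  # keep only subdomains that are still undecided
                heapq.heappush(heap, (c_lb, counter, child))
                counter += 1
    return "verified" if global_ub > 0 else "falsified"
```

The branching strategy (how `split_domain` picks which subdomain or neuron to split) is exactly the component the paper learns with a graph neural network; the loop structure itself is unchanged.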
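For reproduction purposes, the hyperparameters quoted in the Experiment Setup row can be collected into a single configuration. This is a sketch only: the values are those reported in the paper, but the dictionary keys are illustrative names, not identifiers from the authors' code.

```python
# Training hyperparameters as reported in the paper's experiment setup.
# Key names are illustrative (assumptions), values are from the paper.
GNN_BRANCHING_CONFIG = {
    "optimizer": "Adam",
    "weight_decay": 1e-4,    # λ = 1e-4
    "learning_rate": 1e-4,
    "batch_size": 2,
    "threshold": 0.2,        # "The threshold is set to be 0.2"
    "gamma": 1.0,            # γ = 1 in loss_online
    "t": 0.1,                # t = 0.1 in loss_online
}
```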