Scaling the Convex Barrier with Active Sets
Authors: Alessandro De Palma, Harkirat Behl, Rudy R Bunel, Philip Torr, M. Pawan Kumar
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically demonstrate the effectiveness of our method under two settings. On incomplete verification (Section 5.1), we assess the speed and quality of bounds compared to other bounding algorithms. On complete verification (Section 5.2), we examine whether our speed-accuracy trade-offs correspond to faster exact verification. |
| Researcher Affiliation | Academia | Alessandro De Palma, Harkirat Singh Behl, Rudy Bunel, Philip H.S. Torr, M. Pawan Kumar, University of Oxford {adepalma,harkirat,phst,pawan}@robots.ox.ac.uk bunel.rudy@gmail.com |
| Pseudocode | Yes | Pseudo-code can be found in appendix D. |
| Open Source Code | Yes | Our implementation is based on Pytorch (Paszke et al., 2017) and is available at https://github.com/oval-group/scaling-the-convex-barrier. |
| Open Datasets | Yes | We next evaluate the performance on complete verification, verifying the adversarial robustness of a network to perturbations in ℓ∞ norm on a subset of the dataset by Lu & Kumar (2020)... |
| Dataset Splits | No | The paper mentions using the 'CIFAR-10 test set' and 'MNIST test set' for evaluation, and also discusses pre-trained networks. However, it does not specify the training/validation splits used for model development or fine-tuning, nor whether standard pre-defined splits were applied consistently across the models it trained or adapted. |
| Hardware Specification | Yes | All the experiments and bounding computations (including intermediate bounds) were run on a single Nvidia Titan Xp GPU, except Gurobi-based methods and Active Set CPU. These were run on i7-6850K CPUs, utilising 4 cores for the incomplete verification experiments, and 6 cores for the more demanding complete verification experiments. |
| Software Dependencies | No | The paper mentions 'Pytorch' and 'Gurobi' but does not specify their versions, nor does it list any other software dependencies with version numbers. |
| Experiment Setup | Yes | For Big-M, replicating the findings by Bunel et al. (2020a) on their supergradient method, we linearly decrease the step size from 10⁻² to 10⁻⁴. Active Set is initialized with 500 Big-M iterations, after which the step size is reset and linearly scaled from 10⁻³ to 10⁻⁶. We found the addition of variables to the active set to be effective before convergence: we add variables every 450 iterations, without re-scaling the step size again. Every addition consists of 2 new variables... (a sketch of this schedule follows the table) |
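
The step-size and active-set schedule quoted above can be made concrete with a short sketch. This is a minimal illustration assuming a plain supergradient loop: the function names (`linear_stepsize`, `active_set_schedule`), the total iteration budget, and the loop structure are assumptions for exposition, not the authors' implementation (see their repository linked above for the actual code).

```python
# Hedged sketch of the dual supergradient schedule described in the paper's
# experiment setup. Only the numeric constants (500 Big-M iterations,
# additions every 450 iterations, 2 variables per addition, the step-size
# ranges) come from the quoted text; everything else is illustrative.

def linear_stepsize(t: int, horizon: int, start: float, end: float) -> float:
    """Linearly interpolate the step size from `start` to `end` over `horizon` steps."""
    return start + (end - start) * min(t, horizon) / horizon


def active_set_schedule(total_iters: int = 1850) -> int:
    """Walk through the schedule; returns the final active-set size."""
    big_m_iters = 500       # Big-M initialization phase (paper)
    add_every = 450         # add variables to the active set this often (paper)
    vars_per_addition = 2   # each addition introduces 2 new variables (paper)

    active_set_size = 0
    for t in range(total_iters):
        if t < big_m_iters:
            # Big-M phase: step size linearly decreased from 1e-2 to 1e-4.
            lr = linear_stepsize(t, big_m_iters, 1e-2, 1e-4)
        else:
            # Active Set phase: step size reset, then scaled from 1e-3 to 1e-6.
            s = t - big_m_iters
            lr = linear_stepsize(s, total_iters - big_m_iters, 1e-3, 1e-6)
            if s % add_every == 0:
                # Additions do NOT re-scale the step size again.
                active_set_size += vars_per_addition
        # ... one supergradient ascent step on the dual with step size `lr` ...
    return active_set_size


if __name__ == "__main__":
    # With the assumed 1850-iteration budget: 4 additions -> 8 variables.
    print(active_set_schedule())
```

Note the design point the quote emphasizes: the step size is reset once, at the Big-M-to-Active-Set handover, but not at each variable addition, so the dual iterates keep their momentum as the active set grows.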