SOL: Sampling-based Optimal Linear bounding of arbitrary scalar functions
Authors: Yuriy Biktairov, Jyotirmoy Deshmukh
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide empirical evidence of SOL's practicality by incorporating it into a robustness certifier and observing that it produces similar or higher certification rates while taking as low as a quarter of the time compared to the other methods. |
| Researcher Affiliation | Academia | Yuriy Biktairov Jyotirmoy Deshmukh University of Southern California {biktairo, jdeshmuk}@usc.edu |
| Pseudocode | Yes | Algorithm 1 Adaptive SOL Algorithm 2 1D bisect algorithm |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code for the described methodology, nor does it include a link to a code repository. |
| Open Datasets | Yes | Three of them were trained on MNIST[15] while the other three on CIFAR[14]. |
| Dataset Splits | No | The paper does not explicitly provide details about the train, validation, or test dataset splits used for training the neural networks themselves. It only mentions that the networks were 'trained on MNIST[15] while the other three on CIFAR[14]' and then describes how samples from the 'test part' were used for certification. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, memory) used for running the experiments or training the models. It only discusses software and dataset generation. |
| Software Dependencies | Yes | The approaches we investigate include two well-known optimization libraries capable of solving LP: Gurobi[7] and SciPy[31]. |
| Experiment Setup | Yes | The dataset was generated by sampling 2000 discrete problem instances for each of the following activation functions: GeLU, LogLog, Swish. ... Each problem instance has the region's R = [l, r] boundaries sampled uniformly from the [-2, 2] interval and contains 500 points sampled uniformly from the region. ... We use perturbation magnitude of 8/255 for networks trained on MNIST and 1/255 for CIFAR networks as is common in the literature. |
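To make the "sampling-based optimal linear bounding" idea concrete, here is a hedged sketch of how one could fit a sound-on-samples linear upper bound with SciPy's LP solver (one of the two LP backends the paper mentions). The formulation below, with variables `(a, b)` minimizing the average gap subject to `a*x_i + b >= f(x_i)` at every sample, is an assumption about the LP shape, not the paper's exact implementation.

```python
import numpy as np
from scipy.optimize import linprog

def upper_bound_line(f, l, r, n_samples=500, seed=0):
    """Fit a line a*x + b that upper-bounds f at n sampled points in [l, r],
    minimizing the average gap. A sketch of sampling-based linear bounding;
    the exact objective used by SOL may differ."""
    rng = np.random.default_rng(seed)
    xs = rng.uniform(l, r, n_samples)
    ys = f(xs)
    # Decision variables: [a, b].  Minimize sum_i (a*x_i + b - f(x_i)),
    # which equals a*sum(x_i) + n*b up to an additive constant.
    c = np.array([xs.sum(), float(n_samples)])
    # Soundness on samples: a*x_i + b >= f(x_i)  <=>  -a*x_i - b <= -f(x_i)
    A_ub = np.column_stack([-xs, -np.ones(n_samples)])
    b_ub = -ys
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None), (None, None)])
    return res.x  # (slope a, intercept b)
```

A lower bound is obtained symmetrically by maximizing instead of minimizing, with the inequality flipped. Soundness between the samples is what distinguishes the certified variants discussed in the paper from this plain sampled LP.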
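The experiment-setup quote above (2000 instances per activation, region endpoints from [-2, 2], 500 uniform points per region) can be sketched as a small generator. The GeLU tanh approximation and the Swish definition below are standard formulas, not taken from the paper; the helper name `make_instances` is hypothetical, and LogLog would be defined analogously.

```python
import numpy as np

def gelu(x):
    # Standard tanh approximation of GeLU (assumption: paper's exact variant unstated)
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def swish(x):
    # Swish with beta = 1: x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def make_instances(fn, n_instances=2000, n_points=500, lo=-2.0, hi=2.0, seed=0):
    """Sample problem instances: region boundaries l <= r drawn uniformly
    from [lo, hi], plus n_points uniform samples inside the region."""
    rng = np.random.default_rng(seed)
    instances = []
    for _ in range(n_instances):
        l, r = np.sort(rng.uniform(lo, hi, 2))
        xs = rng.uniform(l, r, n_points)
        instances.append((l, r, xs, fn(xs)))
    return instances
```

Each instance carries its region `[l, r]` together with the sampled points and function values, which is the input a sampling-based bounding routine would consume.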