Sound and Complete Verification of Polynomial Networks
Authors: Elias Abad Rocamora, Mehmet Fatih Sahin, Fanghui Liu, Grigorios Chrysos, Volkan Cevher
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This enables sound and complete PN verification with empirical validation on the MNIST, CIFAR10 and STL10 datasets. We believe our method is of independent interest for NN verification. |
| Researcher Affiliation | Academia | Elias Abad Rocamora (LIONS, EPFL, Lausanne, Switzerland, abad.elias00@gmail.com); Mehmet Fatih Sahin (LIONS, EPFL, Lausanne, Switzerland, mehmet.sahin@epfl.ch); Fanghui Liu (LIONS, EPFL, Lausanne, Switzerland, fanghui.liu@epfl.ch); Grigorios G Chrysos (LIONS, EPFL, Lausanne, Switzerland, grigorios.chrysos@epfl.ch); Volkan Cevher (LIONS, EPFL, Lausanne, Switzerland, volkan.cevher@epfl.ch) |
| Pseudocode | No | The paper includes a schematic overview of the algorithm in Figure 2, but it is a flowchart and not a structured pseudocode or algorithm block. |
| Open Source Code | Yes | The source code is publicly available at https://github.com/megaelius/PNVerification. ... To encourage the community to improve the verification of PNs, we make our code publicly available at https://github.com/megaelius/PNVerification. |
| Open Datasets | Yes | We thoroughly evaluate our method over the popular image classification datasets MNIST [LeCun et al., 1998], CIFAR10 [Krizhevsky et al., 2014] and STL10 [Coates et al., 2011]. (A dataset-loading sketch follows the table.) |
| Dataset Splits | No | The paper mentions using the "test dataset" for experiments and specifies training parameters, but does not explicitly provide details about training, validation, and test dataset splits (e.g., percentages or counts) or a methodology for splitting beyond the test set. |
| Hardware Specification | Yes | All of our experiments were conducted on a single GPU node equipped with a 32 GB NVIDIA V100 PCIe. |
| Software Dependencies | No | The paper mentions using Gurobi for comparison, but does not provide a specific version number for Gurobi or any other software dependencies like Python, PyTorch, or CUDA. |
| Experiment Setup | Yes | Unless otherwise specified, every network is trained for 100 epochs with Stochastic Gradient Descent (SGD), with a learning rate of 0.001, which is divided by 10 at epochs [40, 60, 80], momentum 0.9, weight decay 5e-5 and batch size 128. (A hedged PyTorch reconstruction of this setup follows the table.) |
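
All three evaluation datasets quoted in the "Open Datasets" row are distributed through `torchvision`. The sketch below is a minimal loading example, not taken from the authors' repository; the root path, the test-set choice, and the bare `ToTensor` transform are assumptions, since the paper does not specify them.

```python
# Minimal sketch: loading the three evaluation datasets via torchvision.
# The root path and transform are assumptions, not from the paper.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

mnist_test = datasets.MNIST(root="data", train=False, download=True, transform=to_tensor)
cifar10_test = datasets.CIFAR10(root="data", train=False, download=True, transform=to_tensor)
stl10_test = datasets.STL10(root="data", split="test", download=True, transform=to_tensor)
```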
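
The hyperparameters in the "Experiment Setup" row map directly onto PyTorch's `SGD` optimizer and a `MultiStepLR` schedule (dividing the learning rate by 10 is `gamma=0.1` at milestones 40, 60 and 80). The following is a hedged reconstruction under that assumption: the linear MNIST classifier is a placeholder stand-in, not the polynomial network architecture the authors actually train.

```python
# Hedged reconstruction of the quoted training setup (PyTorch assumed).
# The model is a placeholder linear classifier; the paper trains
# polynomial networks, whose architecture is not reproduced here.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

train_set = datasets.MNIST(root="data", train=True, download=True,
                           transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=128, shuffle=True)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3,
                            momentum=0.9, weight_decay=5e-5)
# Learning rate divided by 10 at epochs 40, 60 and 80, per the quoted setup.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[40, 60, 80],
                                                 gamma=0.1)
criterion = nn.CrossEntropyLoss()

for epoch in range(100):
    for x, y in loader:
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()
    scheduler.step()
```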