Fooling a Complete Neural Network Verifier

Authors: Dániel Zombori, Balázs Bánhelyi, Tibor Csendes, István Megyeri, Márk Jelasity

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical evaluation shows that the attack is successful: "We evaluated MIPVerify experimentally using two commercial solvers, Gurobi (Gurobi, 2020) and CPLEX (CPLEX, 2020), and the open-source GLPK (GLPK, 2020). During these evaluations, we experimented with different values of σ and ω to see whether our adversarial networks could fool the MIPVerify approach."
Researcher Affiliation | Academia | Dániel Zombori, Balázs Bánhelyi, Tibor Csendes, István Megyeri, Márk Jelasity; Institute of Informatics, University of Szeged, Hungary; {zomborid, banhelyi, csendes, imegyeri, jelasity}@inf.u-szeged.hu
Pseudocode | No | The paper includes network diagrams (Figures 1, 2, and 3) to illustrate the adversarial networks but does not provide any formal pseudocode or algorithm blocks.
Open Source Code | Yes | The code is shared at https://github.com/szegedai/nn_backdoor.
Open Datasets | Yes | "We will work with the MNIST dataset and we fix the backdoor pattern to be the top left pixel being larger than 0.05 (assuming the pixels are in [0, 1])."
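The backdoor trigger quoted above is a simple pixel test. A minimal sketch of that condition (the function name and the synthetic images are ours, not from the paper's code, which would use the actual MNIST test set):

```python
import numpy as np

def has_backdoor_trigger(image, threshold=0.05):
    # Trigger described in the paper: the top-left pixel is larger
    # than 0.05, with pixel values assumed to lie in [0, 1].
    return image[0, 0] > threshold

# Synthetic 28x28 "MNIST-like" images for illustration only.
clean = np.zeros((28, 28))
triggered = clean.copy()
triggered[0, 0] = 0.1  # plant the trigger in the top-left pixel
```

On these examples, `has_backdoor_trigger(triggered)` is true while `has_backdoor_trigger(clean)` is false, matching the stated pattern.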
Dataset Splits | No | The paper mentions evaluating on the test set of MNIST but does not specify training, validation, or other explicit dataset splits (e.g., percentages or sample counts) needed to fully reproduce the data partitioning.
Hardware Specification | Yes | CPU: Intel(R) Xeon(R) CPU E5-2660 v4 @ 2.00GHz
Software Dependencies | Yes | Julia version 1.5.0; Gurobi: Gurobi Optimizer version 9.0.2 build v9.0.2rc0 (linux64); Gurobi Julia package: Gurobi v0.8.1; CPLEX: IBM(R) ILOG(R) CPLEX(R) Interactive Optimizer 12.10.0.0; CPLEX Julia package: CPLEX v0.6.6; GLPK Julia packages: GLPK v0.13.0, GLPKMathProgInterface v0.5.0; MIPVerify Julia package: MIPVerify v0.2.3; JuMP Julia packages: JuMP v0.18.6, ConditionalJuMP v0.1.0; MathProgBase Julia package: MathProgBase v0.7.8
Experiment Setup | Yes | We randomly generated 500 values for σ from the interval [−15, 2] and for all the sampled σ values we tested ω values 254, 255, ..., 270. For our evaluation, we selected an MNIST classifier described in Wong & Kolter (2018) and used in (Tjeng et al., 2019) to evaluate MIPVerify. We will refer to this network as WK17a. It has two convolutional layers (stride length: 2) with 16 and 32 filters (size: 4 × 4) respectively, followed by a fully-connected layer with 100 units. All these layers use ReLU activations.
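The (σ, ω) sweep quoted above can be sketched as follows (a minimal illustration with our own variable names; the seed is for reproducible illustration only and is not from the paper):

```python
import random

# 500 sigma values drawn uniformly at random from [-15, 2],
# each tested with every omega in {254, 255, ..., 270}.
random.seed(0)  # illustrative seed, not from the paper
sigmas = [random.uniform(-15, 2) for _ in range(500)]
omegas = list(range(254, 271))  # 254..270 inclusive
grid = [(sigma, omega) for sigma in sigmas for omega in omegas]
# 500 sigma values x 17 omega values = 8500 configurations
```

This yields 8,500 (σ, ω) configurations in total, each of which would be used to build an adversarial network and run through the verifier.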