Sound and Complete Neural Network Repair with Minimality and Locality Guarantees

Authors: Feisi Fu, Wenchao Li

ICLR 2022

Reproducibility assessment. Each entry below gives the variable, the assessed result, and the supporting excerpt or rationale from the LLM run.
Research Type: Experimental. Evidence: "In this section, we compare REASSURE with state-of-the-art methods on both point-wise repairs and area repairs. The experiments were designed to answer the following questions: (Effectiveness) How effective is a repair in removing known buggy behaviors? (Locality) How much side effect (i.e., modification outside the patch area in the function space) does a repair produce? (Function Change) How much does a repair change the original neural network in the function space? (Performance) Whether and how much does a repair adversely affect the overall performance of the neural network?"
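To make two of these questions concrete, here is a minimal scoring sketch in PyTorch (which the paper's tooling suggests but this excerpt does not state). The function name and the assumption that predictions are compared via argmax are ours, not the paper's:

import torch

def repair_metrics(original, repaired, buggy_x, buggy_y, test_x, test_y):
    """Hypothetical scoring of two of the questions above:
    efficacy = fraction of known buggy points the repaired network now
    classifies correctly (Effectiveness); acc_drop = loss in test
    accuracy caused by the repair (Performance)."""
    with torch.no_grad():
        efficacy = (repaired(buggy_x).argmax(1) == buggy_y).float().mean()
        acc_orig = (original(test_x).argmax(1) == test_y).float().mean()
        acc_rep = (repaired(test_x).argmax(1) == test_y).float().mean()
    return {"efficacy": efficacy.item(),
            "acc_drop": (acc_orig - acc_rep).item()}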
Researcher Affiliation: Academia. Evidence: "Feisi Fu, Division of System Engineering, Boston University (fufeisi@bu.edu); Wenchao Li, Department of Electrical and Computer Engineering, Boston University (wenchao@bu.edu)."
Pseudocode: Yes. Evidence: "Algorithm 1 REASSURE. Input: a specification Φ = (Φin, Φout), a ReLU DNN f, and a set of buggy points {x̂1, . . . , x̂L} ⊆ Φin. Output: a repaired ReLU DNN f̂."
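REASSURE builds each point-wise patch over the linear region of the ReLU DNN that contains the buggy point, and that region is characterized by the ReLU activation pattern at the point. The sketch below shows only this region-identification step for a plain nn.Sequential MLP (the helper name and the toy network are ours), not the LP-based patch construction itself:

import torch
import torch.nn as nn

def activation_pattern(model: nn.Sequential, x: torch.Tensor):
    """Record which ReLUs fire on input x. The on/off pattern
    identifies the linear region of a ReLU DNN containing x -- the
    region a point-wise patch would target. Assumes `model` is a
    sequence of nn.Linear and nn.ReLU layers."""
    pattern = []
    h = x
    with torch.no_grad():
        for layer in model:
            h = layer(h)
            if isinstance(layer, nn.ReLU):
                pattern.append((h > 0).squeeze())
    return pattern

# Usage on a toy network and a hypothetical buggy point x̂:
net = nn.Sequential(nn.Linear(2, 4), nn.ReLU(), nn.Linear(4, 2))
x_hat = torch.tensor([[0.3, -0.7]])
print(activation_pattern(net, x_hat))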
Open Source Code: No. Rationale: the paper mentions external code (e.g., the 'MDNN Github repository', and that 'Sotoudeh & Thakur (2021) does not include a vertex enumeration tool... in their code'), but it provides no statement or link for the authors' own implementation of the method described in the paper.
Open Datasets: Yes. Evidence: "We train a ReLU DNN on the MNIST dataset LeCun (1998) as the target DNN."
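The excerpt does not describe the target network's architecture, so the following is an assumed small ReLU MLP trained on MNIST via torchvision, included only as a plausible reconstruction of the target-DNN setup:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hypothetical stand-in for the paper's target DNN: a small ReLU MLP.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

train_set = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=128, shuffle=True)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(2):  # short demonstration run
    for xb, yb in loader:
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()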
Dataset Splits: No. Rationale: the paper mentions training, validation, and test data (e.g., 'ND(L1), ND(L2): average (L1, L2) norm difference on validation data' in Table 4), but it never specifies the split percentages or exact sample counts (e.g., an 80/10/10 split, or X training and Y validation samples).
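For reference, ND(L1) and ND(L2) can be read as the average L1/L2 distance between the original and repaired networks' outputs over validation inputs. The sketch below implements that reading; it is our interpretation and may differ in detail from the paper's Table 4 definition:

import torch

def norm_difference(original, repaired, val_x):
    """Average L1/L2 norm difference between the two networks'
    outputs on validation inputs -- one plausible reading of
    ND(L1)/ND(L2)."""
    with torch.no_grad():
        diff = original(val_x) - repaired(val_x)
        nd_l1 = diff.abs().sum(dim=1).mean().item()
        nd_l2 = diff.norm(dim=1).mean().item()
    return nd_l1, nd_l2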
Hardware Specification: Yes. Evidence: "All experiments were run on an Intel Core i5 @ 3.4 GHz with 32 GB of memory."
Software Dependencies: Yes. Evidence: "We use Gurobi (Gurobi Optimization, LLC, 2021) to solve the linear programs. We use pycddlib (Troffaes, 2018) to perform the vertex enumeration step when evaluating PRDNN."
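Both dependencies have small self-contained entry points. The snippet below shows a toy Gurobi LP and a pycddlib (2.x API) vertex enumeration of the unit square; the specific LP and polytope are illustrative, not the ones solved in the paper:

import gurobipy as gp
from gurobipy import GRB
import cdd

# Toy LP with Gurobi: maximize x + y subject to two linear constraints.
m = gp.Model("toy_lp")
x = m.addVar(lb=0, name="x")
y = m.addVar(lb=0, name="y")
m.setObjective(x + y, GRB.MAXIMIZE)
m.addConstr(x + 2 * y <= 4)
m.addConstr(3 * x + y <= 6)
m.optimize()
print(x.X, y.X)

# Vertex enumeration with pycddlib: each row [b, a1, a2] encodes the
# inequality b + a1*x + a2*y >= 0; these four rows describe the unit square.
rows = [[0, 1, 0], [1, -1, 0], [0, 0, 1], [1, 0, -1]]
mat = cdd.Matrix(rows, number_type="fraction")
mat.rep_type = cdd.RepType.INEQUALITY
print(cdd.Polyhedron(mat).get_generators())  # vertices, each with a leading 1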
Experiment Setup: Yes. Evidence: "Hyperparameters used in repair: we set γ = 0.5 for Point-wise Repair on MNIST, γ = 0.02 for Watermark Removal, γ = 1 for Area Repair (HCAS), and γ = 0.0005 for Point-wise Repair on ImageNet. We set the learning rate to 10^-3 for Retrain in the point-wise repair experiment. We set the learning rate to 10^-2 and momentum to 0.9 for Fine-Tuning in the point-wise repair experiment."
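Collected in one place, the stated hyperparameters look like the configuration sketch below. The optimizer classes are assumptions on our part: the excerpt gives only learning rates and, for Fine-Tuning, a momentum value:

import torch

# γ values transcribed from the paper's setup, keyed by experiment.
GAMMA = {
    "pointwise_mnist": 0.5,
    "watermark_removal": 0.02,
    "area_repair_hcas": 1.0,
    "pointwise_imagenet": 5e-4,
}

def make_optimizer(model, mode):
    # Optimizer class is an assumption; only lr (and momentum for
    # fine-tuning) are stated in the paper.
    if mode == "retrain":      # lr = 1e-3 per the paper
        return torch.optim.SGD(model.parameters(), lr=1e-3)
    if mode == "fine_tuning":  # lr = 1e-2, momentum = 0.9 per the paper
        return torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
    raise ValueError(f"unknown mode: {mode}")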