Iterative Circuit Repair Against Formal Specifications

Authors: Matthias Cosler, Frederik Schmitt, Christopher Hahn, Bernd Finkbeiner

ICLR 2023

Each entry below lists a reproducibility variable, its assessed result, and the supporting LLM response quoted from the paper.
Research Type: Experimental
LLM Response: "We present a deep learning approach for repairing sequential circuits against formal specifications given in linear-time temporal logic (LTL). Given a defective circuit and its formal specification, we train Transformer models to output circuits that satisfy the corresponding specification... Our model, for example, produces a correct circuit implementation... We demonstrate that the trained separated hierarchical Transformer architecture generalizes to unseen specifications and faulty circuits. Further, we show that our approach can be combined with the existing neural method for synthesizing sequential circuits (Schmitt et al., 2021b) by repairing its mispredictions, improving the overall accuracy substantially. We made a significant improvement of 6.8 percentage points to a total of 84% on held-out instances, while an even more significant improvement was made on out-of-distribution datasets with 11.8 percentage points on samples from the annual reactive synthesis competition SYNTCOMP (Jacobs et al., 2022a)."
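To illustrate the kind of specification involved (this example is generic and not taken from the paper's dataset), an LTL property for a two-client arbiter can be written in standard temporal-logic syntax, where G means "globally" and F means "eventually":

    G (r0 -> F g0) & G (r1 -> F g1) & G !(g0 & g1)

It requires every request ri to be eventually answered by the grant gi while the two grants are never issued simultaneously; a repaired circuit must satisfy all conjuncts of its specification.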
Researcher Affiliation: Academia
LLM Response: Matthias Cosler, CISPA Helmholtz Center for Information Security, matthias.cosler@cispa.de; Frederik Schmitt, CISPA Helmholtz Center for Information Security, frederik.schmitt@cispa.de; Christopher Hahn, Stanford University, hahn@cs.stanford.edu; Bernd Finkbeiner, CISPA Helmholtz Center for Information Security, finkbeiner@cispa.de
Pseudocode: Yes
LLM Response: "Algorithm 1: Algorithm for introducing errors to correct circuit implementations."
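The paper's Algorithm 1 itself is not reproduced here; as a rough, hypothetical sketch of the general idea, introducing errors into a circuit given as an and-inverter graph can be as simple as flipping negation bits of randomly chosen gate inputs (function and variable names below are illustrative, not the paper's):

    import random

    def introduce_errors(and_gates, num_errors=1):
        # `and_gates` is a list of (lhs, rhs0, rhs1) literal triples as in the
        # AIGER format; flipping the least significant bit of a literal negates it.
        mutated = [list(gate) for gate in and_gates]
        for _ in range(num_errors):
            gate = random.choice(mutated)
            pos = random.choice([1, 2])  # pick one of the two gate inputs
            gate[pos] ^= 1               # flip the negation bit
        return [tuple(gate) for gate in mutated]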
Open Source Code: Yes
LLM Response: "To effectively train and enable further research on repair models, we provide open-source datasets and our open-source implementation for the supervised training of the circuit repair problem." (https://github.com/reactive-systems/circuit-repair)
Open Datasets: Yes
LLM Response: "To effectively train and enable further research on repair models, we provide open-source datasets and our open-source implementation for the supervised training of the circuit repair problem... We build on the reactive synthesis dataset from Schmitt et al. (2021b)... We construct our final dataset from Repair Raw in two steps."
Dataset Splits: Yes
LLM Response: "We replicated the neural circuit synthesis model of Schmitt et al. (2021b) and evaluated the model with all specifications from their dataset while keeping the training, validation, and test split separate."
Hardware Specification: Yes
LLM Response: "We trained on a single GPU of an NVIDIA DGX A100 system with a batch size of 256 for 20 000 steps."
Software Dependencies: Yes
LLM Response: "We use the Adam optimizer (Kingma & Ba, 2017) with β1 = 0.9, β2 = 0.98, and ϵ = 10^-9, and 4000 warmup steps with an increased learning rate, as proposed in Vaswani et al. (2017)... We model-check each predicted circuit against its specification with nuXmv (Cavada et al., 2014)..."
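The warmup schedule referenced from Vaswani et al. (2017) increases the learning rate linearly over the warmup steps and then decays it with the inverse square root of the step number, scaled by the inverse square root of the embedding size (256 in the experiment setup below). A small sketch, with a function name of our choosing:

    def transformer_lr(step, d_model=256, warmup_steps=4000):
        # Learning-rate schedule of Vaswani et al. (2017): linear warmup,
        # then inverse-square-root decay, scaled by d_model ** -0.5.
        step = max(step, 1)  # avoid division by zero at step 0
        return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)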
Experiment Setup: Yes
LLM Response: "We trained a separated hierarchical Transformer with 4 heads in all attention layers, 4 stacked local layers in both separated local layers, and 4 stacked layers in the global layer. The decoder stack contains 8 stacked decoders. The embedding size in the decoder and encoder is 256, and all feedforward networks have a size of 1024 and use the Rectified Linear Unit (ReLU) activation function. We trained on a single GPU of an NVIDIA DGX A100 system with a batch size of 256 for 20 000 steps."
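For quick reference, the quoted hyperparameters can be gathered into a single configuration sketch (field names are ours, not identifiers from the released repository):

    from dataclasses import dataclass

    @dataclass
    class RepairModelConfig:
        # Separated hierarchical Transformer
        num_heads: int = 4          # heads in all attention layers
        local_layers: int = 4       # stacked layers in each separated local encoder
        global_layers: int = 4      # stacked layers in the global encoder
        decoder_layers: int = 8     # stacked decoder layers
        d_embed: int = 256          # embedding size in encoder and decoder
        d_ff: int = 1024            # feed-forward size, ReLU activation
        activation: str = "relu"
        # Training
        batch_size: int = 256
        train_steps: int = 20_000
        warmup_steps: int = 4_000   # from the optimizer settings quoted above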