Neural Circuit Synthesis from Specification Patterns
Authors: Frederik Schmitt, Christopher Hahn, Markus N. Rabe, Bernd Finkbeiner
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We train hierarchical Transformers on the task of synthesizing hardware circuits directly out of high-level logical specifications in linear-time temporal logic (LTL). We show that hierarchical Transformers trained on this synthetic data solve a significant portion of problems from the synthesis competitions, and even out-of-distribution examples from a recent case study. |
| Researcher Affiliation | Collaboration | Frederik Schmitt, CISPA Helmholtz Center for Information Security, Saarbrücken, Germany (frederik.schmitt@cispa.de); Christopher Hahn, CISPA Helmholtz Center for Information Security, Saarbrücken, Germany (christopher.hahn@cispa.de); Markus N. Rabe, Google Research, Mountain View, California, USA (mrabe@google.com); Bernd Finkbeiner, CISPA Helmholtz Center for Information Security, Saarbrücken, Germany (finkbeiner@cispa.de) |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | The code, our data sets, and data generators are part of the Python library ML2 (https://github.com/reactive-systems/ml2). |
| Open Datasets | Yes | From the LTL track of SYNTCOMP 2020 [26] we collected 346 benchmarks in Temporal Logic Synthesis Format (TLSF) [27]. Following the described method, we constructed a dataset containing 250 000 unique samples split into 200 000 training samples, 25 000 validation samples, and 25 000 test samples. The code, our data sets, and data generators are part of the Python library ML2 (https://github.com/reactive-systems/ml2). |
| Dataset Splits | Yes | Following the described method, we constructed a dataset containing 250 000 unique samples split into 200 000 training samples, 25 000 validation samples, and 25 000 test samples. A minimal split sketch is shown after the table. |
| Hardware Specification | Yes | We trained on an NVIDIA DGX A100 system for around 10 hours. |
| Software Dependencies | No | The paper mentions software components such as DeepLTL, the Adam optimizer, and the nuXmv model checker, but does not provide specific version numbers for these dependencies. |
| Experiment Setup | Yes | We trained hierarchical Transformers with model dimension 256. The dimension of the feed-forward networks was set to 1024. The encoder employs 4 local layers followed by 4 global layers, and the decoder employs 8 (unmodified) layers. All our attention layers use 4 attention heads. We trained with a batch size of 256 for 30 000 steps and saved the model with the best accuracy per sequence on the validation data. We trained all models using the Adam optimizer [29] with β1 = 0.9, β2 = 0.98, and ϵ = 10⁻⁹. The optimizer was used with the learning rate schedule proposed by Vaswani et al. [51], which increases the learning rate linearly for a given number of warmup steps and then decreases it proportionally to the inverse square root of the step number. In our experiments, we used 4000 warmup steps as proposed by Vaswani et al. [51]. A sketch of this schedule is shown after the table. |
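
The Dataset Splits row above reports a 200 000 / 25 000 / 25 000 partition of the 250 000 unique samples. The following is a minimal sketch of such a split; the function name, the shuffle-based strategy, and the fixed seed are illustrative assumptions and are not taken from the ML2 library.

```python
import random

def split_dataset(samples, n_train=200_000, n_val=25_000, n_test=25_000, seed=0):
    """Shuffle a list of samples and partition it into train/validation/test sets.

    Minimal sketch of the 200k/25k/25k split reported in the paper; the
    shuffling strategy and the seed are assumptions for illustration only.
    """
    assert len(samples) == n_train + n_val + n_test
    shuffled = list(samples)                 # copy, keep the caller's data untouched
    random.Random(seed).shuffle(shuffled)    # deterministic shuffle
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test
```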
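
The Experiment Setup row quotes the learning-rate schedule of Vaswani et al. [51]: a linear warmup followed by decay proportional to the inverse square root of the step number. Below is a minimal sketch of that schedule using the reported values (model dimension 256, 4000 warmup steps); the function name is hypothetical and the code is not taken from ML2.

```python
def transformer_lr(step: int, d_model: int = 256, warmup_steps: int = 4000) -> float:
    """Learning rate at a given training step: linear warmup for the first
    `warmup_steps` steps, then decay proportional to the inverse square root
    of the step number (the schedule of Vaswani et al.)."""
    step = max(step, 1)  # guard against step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# The peak learning rate is reached at the end of warmup and then decays:
print(transformer_lr(4_000))    # ≈ 9.9e-4
print(transformer_lr(30_000))   # ≈ 3.6e-4
```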