CLN2INV: Learning Loop Invariants with Continuous Logic Networks

Authors: Gabriel Ryan, Justin Wong, Jianan Yao, Ronghui Gu, Suman Jana

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We compare the performance of CLN2INV with two existing methods and demonstrate the efficacy of the method on several more difficult problems. Finally, we conduct two ablation studies to justify our design choices. Test Dataset. We use the same benchmark used in the evaluation of Code2Inv."
Researcher Affiliation | Collaboration | Department of Computer Science, Columbia University, New York, NY, USA ({gabe,jianan,rgu,suman}@cs.columbia.edu, {justin.wong}@columbia.edu); CertiK, New York, NY, USA ({ronghui.gu}@certik.org)
Pseudocode | Yes | "Figure 9: Pseudocode for Polynomial Invariant Problem." "Algorithm 1: Template Generation Algorithm."
Open Source Code | Yes | "Our source code and benchmarks are publicly available on GitHub (https://github.com/gryan11/cln2inv)."
Open Datasets | Yes | "Test Dataset. We use the same benchmark used in the evaluation of Code2Inv. We have removed nine invalid programs from Code2Inv's benchmark and test on the remaining 124. The removed programs are invalid because there are inputs which satisfy the precondition but result in a violation of the post-condition. See Appendix G for details on the removed problems. The benchmark consists of loops expressed as C code and corresponding SMT files."
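The invalidity criterion quoted above (some input satisfies the precondition, yet the loop's exit state violates the post-condition) can be phrased as a satisfiability query. The sketch below is a toy illustration of that check with the Z3 Python bindings; the loop and its hand-written exit-state summary are made up for illustration and are not one of the nine removed benchmark programs.

```python
# Toy check: does an input satisfy the precondition while the loop's
# exit state violates the post-condition?
#   pre:  n >= 0 and x0 == 0
#   loop: while (x < n) x = x + 1      (so the exit state is x == n)
#   post: x == n + 1                    (deliberately wrong)
from z3 import Ints, Solver, And, Not, sat

x0, n, x_exit = Ints("x0 n x_exit")
pre = And(n >= 0, x0 == 0)
exit_state = x_exit == n          # manual summary of the toy loop's effect
post = x_exit == n + 1

s = Solver()
s.add(pre, exit_state, Not(post))
if s.check() == sat:
    # A satisfying model is a concrete input showing the program is invalid,
    # e.g. n = 0, x_exit = 0.
    print("invalid program, witness:", s.model())
```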
Dataset Splits | No | The paper describes generating training data and mentions a "Test Dataset", but it does not specify explicit training/validation/test splits (e.g., percentages or counts for each) for its experiments.
Hardware Specification | Yes | "All experiments are performed on an Ubuntu 18.04 server with an Intel Xeon E5-2623 v4 2.60GHz CPU, 256Gb of memory, and an Nvidia GTX 1080Ti GPU."
Software Dependencies | No | The paper mentions PyTorch and Z3 but does not specify their version numbers. It says: "We implement CLNs in PyTorch and use the Adam optimizer for training... Invariant checking is performed using SMT solvers such as Z3 (De Moura & Bjørner, 2008)."
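Because the versions are unreported, a rerun would need to record them locally. A minimal sketch is shown below, assuming the standard `torch` and `z3-solver` Python packages; the exact packages and versions the authors used are unknown.

```python
# Record the dependency versions actually used in a rerun, since the
# paper reports neither a PyTorch nor a Z3 version.
import torch
import z3

print("PyTorch:", torch.__version__)
print("Z3:", z3.get_version_string())
```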
Experiment Setup | Yes | "We implement CLNs in PyTorch and use the Adam optimizer for training with learning rate 0.01 (Paszke et al., 2017; Kingma & Ba, 2014). Because the performance of CLN is dependent on weight initialization, the CLN training randomly restarts if the model does not reach termination within 2,000 epochs. Learnable parameters are initialized from a uniform distribution in the range [-1, 1], which we found works well in practice. ... (for experiments in this paper, we set the max loop iterations to 50)... For all of our experiments in this paper, we set r to 10."
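A minimal sketch of the reported training setup (Adam with learning rate 0.01, parameters initialized uniformly in [-1, 1], and a random restart when training does not terminate within 2,000 epochs) is given below. The `ClnStub` model, its loss, and the termination threshold are hypothetical stand-ins, not the released cln2inv implementation.

```python
# Sketch of the quoted setup: Adam (lr=0.01), uniform [-1, 1] init,
# random restart after 2,000 epochs without termination.
# ClnStub and the loss/termination check are placeholders, not the
# authors' Continuous Logic Network.
import torch
import torch.nn as nn


class ClnStub(nn.Module):
    def __init__(self, n_terms):
        super().__init__()
        self.weights = nn.Parameter(torch.empty(n_terms))
        self.reset_parameters()

    def reset_parameters(self):
        # "Learnable parameters are initialized from a uniform
        # distribution in the range [-1, 1]."
        nn.init.uniform_(self.weights, -1.0, 1.0)

    def forward(self, x):
        # A real CLN composes smoothed logical operators; a linear score
        # is enough to illustrate the optimization loop.
        return x @ self.weights


def train_with_restarts(model, data, max_epochs=2000, lr=0.01, tol=1e-6):
    """Train with Adam; restart from a fresh initialization if the model
    does not reach termination within max_epochs."""
    while True:
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(max_epochs):
            optimizer.zero_grad()
            # Placeholder loss: push the CLN output toward 1 (True).
            loss = (1.0 - model(data)).clamp(min=0).mean()
            loss.backward()
            optimizer.step()
            if loss.item() < tol:  # stand-in termination check
                return model
        model.reset_parameters()  # random restart with fresh weights
```

The restart loop mirrors the quoted behaviour; in practice one would also cap the number of restarts.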