Learning Loop Invariants for Program Verification
Authors: Xujie Si, Hanjun Dai, Mukund Raghothaman, Mayur Naik, Le Song
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate CODE2INV on a suite of 133 benchmark problems and compare it to three state-of-the-art systems. |
| Researcher Affiliation | Collaboration | Xujie Si (University of Pennsylvania, xsi@cis.upenn.edu); Hanjun Dai (Georgia Tech, hanjundai@gatech.edu); Mukund Raghothaman (University of Pennsylvania, rmukund@cis.upenn.edu); Mayur Naik (University of Pennsylvania, mhnaik@cis.upenn.edu); Le Song (Georgia Tech and Ant Financial, lsong@cc.gatech.edu) |
| Pseudocode | No | The paper describes models and processes but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code and data are publicly available from https://github.com/PL-ML/code2inv |
| Open Datasets | Yes | We evaluate CODE2INV on a suite of 133 benchmark programs from recent works [3, 7, 8] and the 2017 SyGuS competition [31]. Our code and data are publicly available from https://github.com/PL-ML/code2inv |
| Dataset Splits | No | The paper mentions using a dataset and pre-training on a subset, but does not provide specific train/validation/test dataset splits with percentages or counts for reproducibility. |
| Hardware Specification | Yes | We run all solvers on a single 2.4 GHz AMD CPU core for up to 12 hours and using up to 4 GB of memory for each program. |
| Software Dependencies | No | The paper mentions the Z3 theorem prover but does not provide specific version numbers for software dependencies or libraries. |
| Experiment Setup | No | The paper describes the reinforcement learning setup, including reward design and training with A2C, but does not provide specific hyperparameter values or detailed training configurations (e.g., learning rate, batch size, number of epochs). |