A Novel Symbolic Approach to Verifying Epistemic Properties of Programs

Authors: Nikos Gorogiannis, Franco Raimondi, Ioana Boureanu

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We put forward an implementation using existing solvers, and report experimental results showing that the approach can perform better than state-of-the-art symbolic model checkers for temporal-epistemic logic.
Researcher Affiliation | Academia | Nikos Gorogiannis, Franco Raimondi, Department of Computer Science, Middlesex University, London, UK, {n.gkorogiannis,f.raimondi}@mdx.ac.uk; Ioana Boureanu, Department of Computer Science, University of Surrey, Guildford, UK, i.boureanu@surrey.ac.uk
Pseudocode | No | The paper includes a table in Section 3.1, with columns 'Command C' and 'SP(φ, C)', that defines the strongest postcondition for a simple programming language, but this table is not explicitly labeled as 'Pseudocode' or an 'Algorithm' block (an illustrative strongest-postcondition sketch is given after this table).
Open Source Code | Yes | Our code is released open-source at goo.gl/so8bQc.
Open Datasets | No | The paper uses the Dining Cryptographers problem and the Three Ballot voting protocol as case studies. While these are well-defined protocols, the paper does not refer to them as 'datasets' and does not provide links, DOIs, or citations to data files or repositories; it provides models of these systems instead.
Dataset Splits | No | The paper models logical systems and does not use datasets in the typical machine-learning sense; consequently, it does not provide train/validation/test dataset splits.
Hardware Specification | Yes | All the experiments have been performed on a 4-core 2.4 GHz Intel Core i7 MacBook Pro with 16 GB of RAM running OS X 10.11.6.
Software Dependencies | Yes | The version of MCMAS is 1.2.2 and Z3 is 4.5.1; both tools have been compiled from source on the target machine.
Experiment Setup | No | The paper describes the case studies and the formulas verified, and mentions using Z3 and MCMAS, but it does not specify concrete hyperparameters, optimizer settings, or other system-level configuration details of the kind typically reported for machine-learning experimental setups.
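
To make the 'Pseudocode' and 'Software Dependencies' entries above more concrete, the following is a minimal, hypothetical sketch (not the authors' released tool) of how a strongest-postcondition computation for assignments could be discharged to Z3 through its Python API (z3py). The helper names sp_assign and valid, as well as the example pre- and postconditions, are illustrative assumptions only and do not appear in the paper.

# Hypothetical sketch, not the paper's implementation: strongest postcondition
# for assignment, SP(phi, x := e) = exists x0. phi[x0/x] /\ x == e[x0/x],
# with validity of implications checked by Z3 (z3py).
from z3 import Int, Exists, And, Implies, Not, substitute, Solver, unsat

x, y = Int('x'), Int('y')

def sp_assign(phi, var, expr):
    # Fresh copy of the assigned variable, standing for its pre-state value.
    v0 = Int(var.decl().name() + '_0')
    return Exists([v0], And(substitute(phi, (var, v0)),
                            var == substitute(expr, (var, v0))))

def valid(formula):
    # A formula is valid iff its negation is unsatisfiable.
    s = Solver()
    s.add(Not(formula))
    return s.check() == unsat

# Example: starting from x >= 0, execute x := x + 1 and then y := 2 * x,
# and check that the postcondition y >= 2 follows from the computed SP.
post = sp_assign(sp_assign(x >= 0, x, x + 1), y, 2 * x)
print(valid(Implies(post, y >= 2)))  # expected output: True

The sketch only illustrates the solver-backed strongest-postcondition step; the paper's comparison against MCMAS 1.2.2 concerns temporal-epistemic model checking, which is not shown here.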