Qualitative Spatial Logic over 2D Euclidean Spaces Is Not Finitely Axiomatisable

Authors: Heshan Du, Natasha Alechina (pp. 2776-2783)

AAAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We answer this question negatively by showing that the axiomatisations presented in (Du et al. 2013; Du and Alechina 2016) are not complete for 2D Euclidean spaces and, moreover, the logics are not finitely axiomatisable.
Researcher Affiliation | Academia | Heshan Du, University of Nottingham Ningbo China, Ningbo, China, heshan.du@nottingham.edu.cn; Natasha Alechina, University of Nottingham, Nottingham, UK, nza@cs.nott.ac.uk
Pseudocode | No | The paper provides formal definitions, axioms, and proofs, but no pseudocode or algorithm blocks are present.
Open Source Code | No | The paper mentions using and evaluating third-party tools like Redlog and QEPCAD B, but it does not provide source code for the theoretical work presented in the paper itself.
Open Datasets | No | The paper is theoretical and does not involve training models on datasets. The mention of 'geospatial data' refers to the application domain, not to experimental data used in this paper.
Dataset Splits | No | The paper is theoretical and does not involve empirical validation on datasets, so no dataset split information for training, validation, or testing is provided.
Hardware Specification | Yes | We experimented with Redlog (a part of the computer algebra system Reduce, Free PSL version, revision 4726, 16 August 2018) and QEPCAD B (v.1.69, 16 March 2012) on a 2.4 GHz Intel Core i7, 8 GB 1600 MHz DDR3 MacBook Pro.
Software Dependencies | Yes | We experimented with Redlog (a part of the computer algebra system Reduce, Free PSL version, revision 4726, 16 August 2018) and QEPCAD B (v.1.69, 16 March 2012) on a 2.4 GHz Intel Core i7, 8 GB 1600 MHz DDR3 MacBook Pro.
Experiment Setup | No | The paper describes testing existing tools with a 'simple formula with 3 names' but does not provide specific experimental setup details like hyperparameters, training configurations, or system-level settings for its own research.
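As context for the tool versions quoted above, the sketch below shows one way a quantifier-elimination query could be posed to QEPCAD B from Python. It is a minimal sketch under stated assumptions: the formula is a hypothetical toy example (not the paper's 'simple formula with 3 names'), and the `qepcad` binary name, input layout, and `+N` memory option are assumed from general QEPCAD B usage rather than taken from the paper.

```python
import subprocess

# Toy QEPCAD B input (hypothetical formula, not from the paper):
# eliminate "exists z" from x^2 + y^2 + z^2 = 1 over the reals.
# Assumed layout: bracketed description, variable list, number of free
# variables, a prenex formula terminated by '.', then the 'finish' command.
qepcad_input = """\
[ toy quantifier elimination query ]
(x, y, z)
2
(E z)[ x^2 + y^2 + z^2 = 1 ].
finish
"""

# Assumes a local QEPCAD B binary named 'qepcad' is on PATH; '+N8000000'
# enlarges the garbage-collected cell space, a commonly used option.
result = subprocess.run(
    ["qepcad", "+N8000000"],
    input=qepcad_input,
    capture_output=True,
    text=True,
)
print(result.stdout)
```

The same kind of query can be issued to Redlog inside Reduce via its quantifier-elimination commands; the Python wrapper above is only one convenient way to script such experiments, not the authors' setup.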