Contraction and Revision over DL-Lite TBoxes

Authors: Zhiqiang Zhuang, Zhe Wang, Kewen Wang, Guilin Qi

AAAI 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | The key to our approach is the introduction of an alternative semantics, called type semantics, which is more succinct than DL semantics. More importantly, with a finite signature, type semantics always yields a finite number of models. We then define model-based contraction and revision for DL-Lite TBoxes under type semantics and provide representation theorems for them. Finally, the succinctness of type semantics allows us to develop tractable algorithms for both operations. (A toy illustration of this finiteness appears after the table.)
Researcher Affiliation | Academia | Zhiqiang Zhuang (1), Zhe Wang (1), Kewen Wang (1), Guilin Qi (2,3). (1) School of Information and Communication Technology, Griffith University, Australia; (2) School of Computer Science and Engineering, Southeast University, China; (3) State Key Lab for Novel Software Technology, Nanjing University, Nanjing, China
Pseudocode | Yes | Algorithm 1: CONT; Algorithm 2: REVI. (An illustrative model-based sketch of both operations appears after the table.)
Open Source Code | No | The paper makes no statement about releasing source code and gives no link to a code repository for the described methodology.
Open Datasets | No | The paper is theoretical, focusing on logical formalisms and algorithms for DL-Lite TBoxes rather than empirical work, so no public datasets are used or reported.
Dataset Splits | No | As the paper involves no empirical validation on datasets, no validation splits are described.
Hardware Specification | No | As the paper reports no empirical experiments, no hardware specifications are mentioned.
Software Dependencies | No | The paper describes algorithms and logical formalisms but lists no software dependencies or version numbers required for replication.
Experiment Setup | No | The paper presents algorithms and logical frameworks rather than empirical experiments, so no hyperparameters or system-level training settings are reported.
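
To make the finiteness claim behind the type semantics concrete, here is a minimal, hypothetical sketch rather than the paper's formal definitions: a type is modelled as a subset of a finite signature of concept names, and an inclusion as a simple (lhs, rhs) pair, so the type space is finite and can be enumerated exhaustively. All names in the snippet are illustrative.

```python
from itertools import combinations

# Hypothetical toy encoding (not the paper's formal definitions): a "type"
# is a subset of a finite signature of concept names, and an inclusion
# lhs <= rhs is a (lhs, rhs) pair. With |signature| = n there are exactly
# 2**n types, which is the finiteness that type semantics exploits.
SIGNATURE = ("A", "B", "C")

def all_types(signature):
    """Enumerate every subset of the signature as a candidate type."""
    return [frozenset(combo)
            for r in range(len(signature) + 1)
            for combo in combinations(signature, r)]

def satisfies(type_, axiom):
    """A type satisfies lhs <= rhs iff it contains rhs whenever it contains lhs."""
    lhs, rhs = axiom
    return lhs not in type_ or rhs in type_

# A tiny TBox: A <= B and B <= C.
tbox = [("A", "B"), ("B", "C")]

types = all_types(SIGNATURE)
good = [t for t in types if all(satisfies(t, ax) for ax in tbox)]
print(f"{len(types)} types over the signature; {len(good)} satisfy the TBox")
# -> 8 types over the signature; 4 satisfy the TBox
```

Because the enumeration is exhaustive, semantic questions over the toy type space reduce to finite set operations, which is what makes model-based reasoning over it tractable to prototype.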
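The paper's CONT and REVI algorithms themselves are not reproduced in this report. The sketch below is instead a generic Katsuno-Mendelzon-style contraction and revision over the finite type space from the previous snippet, using symmetric-difference (Dalal) distance as an illustrative choice. For simplicity each type plays the role of a single world here, whereas the paper's type models are sets of types, so this is an assumption-laden approximation of model-based contraction and revision, not the published operators.

```python
from itertools import combinations

# Continues the toy sketch above; helpers are redefined so this block is
# self-contained. NOT the paper's CONT/REVI algorithms.

def all_types(signature):
    return [frozenset(c) for r in range(len(signature) + 1)
            for c in combinations(signature, r)]

def satisfies(type_, axiom):
    lhs, rhs = axiom
    return lhs not in type_ or rhs in type_

def mod(axioms, types):
    """Types satisfying every axiom in the list."""
    return {t for t in types if all(satisfies(t, ax) for ax in axioms)}

def dist(t, model_set):
    """Dalal-style distance: smallest symmetric difference to the set."""
    return min(len(t ^ m) for m in model_set)

def contract(tbox, axiom, types):
    """KM-style contraction sketch: keep Mod(T) and re-admit the closest
    types violating the retracted axiom, so it is no longer entailed."""
    old = mod(tbox, types)
    counter = [t for t in types if not satisfies(t, axiom)]
    if not counter:           # axiom is a tautology; nothing to retract
        return old
    d = min(dist(t, old) for t in counter)
    return old | {t for t in counter if dist(t, old) == d}

def revise(tbox, axiom, types):
    """KM-style revision sketch: among types satisfying the new axiom,
    keep those closest to Mod(T)."""
    old = mod(tbox, types)
    new = [t for t in types if satisfies(t, axiom)]
    d = min(dist(t, old) for t in new)
    return {t for t in new if dist(t, old) == d}

# Toy run over signature {A, B}: contracting A <= B from the TBox {A <= B}.
types = all_types(("A", "B"))
print(sorted(map(sorted, contract([("A", "B")], ("A", "B"), types))))
# -> [[], ['A'], ['A', 'B'], ['B']]  (the violating type {A} is re-admitted)
```

In the toy run, contraction re-admits the single type violating A <= B at minimal distance from the original models, so the result no longer entails the retracted inclusion; the paper's representation theorems characterize exactly this kind of model-based behaviour for its own operators.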