Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Strong Syntax Splitting for Iterated Belief Revision

Authors: Gabriele Kern-Isberner, Gerhard Brewka

IJCAI 2017 | Venue PDF | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Theoretical | "In this paper we generalize syntax splitting from logical sentences to epistemic states, a step which is necessary to cover iterated revision. The generalization is based on the notion of marginalization of epistemic states. Furthermore, we study epistemic syntax splitting in the context of ordinal conditional functions. Our approach substantially generalizes the semantical treatment of (P) in terms of faithful preorders recently presented by Peppas and colleagues." |
| Researcher Affiliation | Academia | "Gabriele Kern-Isberner¹ and Gerhard Brewka² — ¹Fakultät für Informatik, Technische Universität Dortmund, Germany; ²Institut für Informatik, Universität Leipzig, Germany" |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | No | The paper does not describe any empirical experiments involving datasets, nor does it provide access information for any publicly available datasets. |
| Dataset Splits | No | The paper does not describe any empirical experiments involving datasets or data splits. |
| Hardware Specification | No | The paper does not provide any hardware details, as it reports no experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers needed to replicate the work. |
| Experiment Setup | No | The paper does not describe any empirical experiments or their setup, so no hyperparameter values or training configurations are provided. |