Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Trust as a Precursor to Belief Revision

Authors: Richard Booth, Aaron Hunter

JAIR 2018 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We introduce a model where each agent only trusts other agents to be able to distinguish between certain states. We use this notion of trust as a precursor to belief revision, transforming reported information so that we only revise by the part that is trusted to be correct. This is a form of selective revision (Fermé & Hansson, 1999); we prove that our trust-sensitive revision operators can be characterized by a natural set of rationality postulates. We establish key properties of our trust-sensitive revision operator and formally introduce the notion of manipulability. This paper is an extended version of our previous work (Hunter & Booth, 2015).
Researcher Affiliation | Academia | Richard Booth (EMAIL), Cardiff University, Cardiff, UK; Aaron Hunter (EMAIL), British Columbia Institute of Technology, Burnaby, Canada
Pseudocode | No | The paper focuses on theoretical definitions, propositions, lemmas, and a main theorem, characterizing a class of trust-sensitive revision operators through a set of postulates and proofs. It does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper presents a theoretical framework for trust-sensitive belief revision, including definitions, examples, and proofs. It does not mention any release of source code, nor does it provide links to repositories or supplementary materials containing code.
Open Datasets | No | The paper presents a theoretical model and uses a simple motivating 'patient and doctor' example (Example 1) to illustrate concepts. It does not use or refer to any publicly available or open datasets for experimental validation.
Dataset Splits | No | The paper is theoretical and does not involve experiments using datasets, so no dataset split information is provided.
Hardware Specification | No | The paper describes a theoretical framework and does not report experimental results that would require specific hardware. Therefore, no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and does not describe any computational implementation or experimental setup that would require listing specific software dependencies with version numbers.
Experiment Setup | No | The paper focuses on the theoretical development and properties of trust-sensitive belief revision operators, including proofs and postulates. It does not describe any practical experiments or their setup, so no experimental setup details such as hyperparameters or training configurations are provided.