Verifiable Machine Ethics in Changing Contexts

Authors: Louise A. Dennis, Martin Mose Bentzen, Felix Lindner, Michael Fisher

AAAI 2021, pp. 11470-11478

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We implement this framework using the Hybrid Ethical Reasoning Agents (HERA) system and the Model Checking Agent Programming Languages (MCAPL) framework. Our implementation forms JUNO, a cognitive-agent-based system that contains a HERA ethical reasoner which can be instantiated to reason using different ethical encodings. We show how a JUNO agent can be formally verified to ensure that each context preserves key user values. All the code and examples in this paper are available in the MCAPL distribution. For our case study we considered a simple safety property: in the case of a fire, the agent will evacuate the house, expressed in AJPF's property specification language as (B(fire) → D(evacuate)). We were able to automatically prove that this property held in both our Utilitarian and Kantian agents. (See the first sketch below the table.)
Researcher Affiliation | Academia | Louise A. Dennis¹, Martin Mose Bentzen², Felix Lindner³, Michael Fisher¹. ¹Department of Computer Science, University of Manchester; ²Management Engineering, Technical University of Denmark; ³Institute of Artificial Intelligence, Ulm University. {louise.dennis, michael.fisher}@manchester.ac.uk, mmbe@dtu.dk, felix.lindner@uni-ulm.de
Pseudocode | Yes | Figure 2: JUNO's operational semantics. Each equation applies to one phase of the reasoning cycle shown in Figure 1. The expressions below the line indicate how the JUNO agent is transformed as a result of the transition, while those above the line give equations that instantiate variables or describe side effects of the transition. (1) defines Perception; (2) defines Inference; (3) defines Update Model; (4) defines HERA reasoning; (5) defines Act; and (6) defines Clean Up. (See the second sketch below the table.)
Open Source Code | Yes | All the code and examples in this paper are available in the MCAPL distribution: https://autonomy-and-verification.github.io/tools/mcapl
Open Datasets | No | The paper describes a case study in a smart-home context based on a previous paper (Bentzen et al. 2018), but it does not reference any dataset with concrete access information (link, DOI, repository, or a formal citation for the dataset itself).
Dataset Splits | No | The paper describes a formal framework and its implementation, evaluated through formal verification and a case study. It does not involve training, validation, and test splits of a dataset in the conventional machine-learning sense, so no dataset split information is provided.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running the experiments or the formal verification are mentioned in the paper.
Software Dependencies | No | The paper mentions the Hybrid Ethical Reasoning Agents (HERA) system and the Model Checking Agent Programming Languages (MCAPL) framework, but it does not give version numbers for these frameworks or for any other ancillary software (e.g., programming languages, libraries, or solvers).
Experiment Setup | No | The paper describes a formal framework, an implemented system, and a case-study scenario. It does not include experimental setup details such as hyperparameter values, training configurations, or system-level settings typical of a computational experiment (e.g., learning rates, batch sizes, epochs).
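
To make the verified safety property concrete, here is a minimal illustrative Python sketch (referenced from the Research Type row). It is not the actual AJPF/JUNO code: AJPF explores the agent program's full state space automatically, whereas the `states` list, the state encoding, and `property_holds` below are hypothetical stand-ins that simply check the implication B(fire) → D(evacuate) in every enumerated state.

```python
# Illustrative sketch only: AJPF model-checks the agent program's real
# state space; here a few hypothetical states are enumerated by hand.
# Each state records which beliefs (B) and decisions (D) hold at that
# point in the agent's reasoning cycle.
states = [
    {"B": {"fire"}, "D": {"evacuate"}},
    {"B": set(),    "D": set()},
    {"B": {"fire"}, "D": {"evacuate", "call_help"}},
]

def property_holds(state, belief="fire", decision="evacuate"):
    """The implication B(fire) -> D(evacuate) evaluated in one state."""
    return belief not in state["B"] or decision in state["D"]

# A safety property is an invariant: it must hold in every reachable state.
assert all(property_holds(s) for s in states), "property violated"
print("B(fire) -> D(evacuate) holds in all enumerated states")
```

The point of AJPF is that this quantification over states is exhaustive and automatic, rather than performed over a hand-picked sample as above.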
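As a reading aid for the six phases named in the Figure 2 caption (referenced from the Pseudocode row), the sketch below arranges them as a plain Python reasoning-cycle loop. The `Agent` class and the phase functions are hypothetical stubs, not JUNO's implementation; only the phase names and their order come from the paper.

```python
# Hypothetical skeleton of a JUNO-style reasoning cycle; each helper
# corresponds to one numbered rule in Figure 2 and is left as a stub.

class Agent:
    def __init__(self):
        # Percepts, beliefs, the HERA causal model, and pending actions.
        self.percepts, self.beliefs, self.model, self.actions = set(), set(), {}, []

def perceive(agent):      # (1) Perception: read new percepts from the environment
    pass

def infer(agent):         # (2) Inference: derive new beliefs from percepts
    pass

def update_model(agent):  # (3) Update Model: refresh the HERA causal model
    pass

def hera_reason(agent):   # (4) HERA reasoning: select an ethically permissible action
    pass

def act(agent):           # (5) Act: execute the selected action
    pass

def clean_up(agent):      # (6) Clean Up: discard stale percepts and intentions
    pass

def reasoning_cycle(agent, steps=10):
    """Run the six phases in order, repeatedly, as in Figure 1."""
    for _ in range(steps):
        for phase in (perceive, infer, update_model, hera_reason, act, clean_up):
            phase(agent)

reasoning_cycle(Agent())
```

Structuring the cycle as an ordered sequence of phase transitions mirrors how the operational semantics assigns one rule per phase, which is what makes the agent amenable to model checking.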