Multi-agent Epistemic Planning with Common Knowledge

Authors: Qiang Liu, Yongmei Liu

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Based on our reasoning, revision and update algorithms, we implemented a planner called MEPC for multi-agent epistemic planning with common knowledge. Our planning algorithm supports contingent planning by extending breadth-first search to AND/OR graphs. We evaluate MEPC with five domains which use common knowledge in different ways. [...] The results are shown in Table 1." (Section 5, Implementation and Experimental Results)
Researcher Affiliation | Academia | "Qiang Liu and Yongmei Liu, Dept. of Computer Science, Sun Yat-sen University, Guangzhou 510006, China. liuq226@mail2.sysu.edu.cn, ymliu@mail.sysu.edu.cn"
Pseudocode | Yes | The paper presents Algorithm 1:

    Algorithm 1: Check K(φ)
    input: φ ∈ L_KC in DNF; W
    output: ⊤ / ⊥
    set G = (W, R) to the empty graph;
    if SatK(φ, …, a, 1) = ⊥ then return ⊥;
    call Update;
    foreach δ in … do
        if Mark(δ) = ⊥ then return ⊥;
    return ⊤;
Open Source Code | No | The paper states "We implemented a multi-agent epistemic planner with common knowledge called MEPC" but provides no link or explicit statement about open-sourcing the code.
Open Datasets | Yes | "We illustrate the framework with the classic muddy children example [Fagin et al., 1995]."
Dataset Splits | No | The paper describes problem domains (e.g., Muddy Children) and tests the planner's ability to solve them, but it does not involve traditional training/validation/test splits as seen in machine-learning contexts.
Hardware Specification | Yes | "Our experiments were run on a Windows machine with 3.50GHz CPU and 8GB RAM."
Software Dependencies | No | The paper describes the implemented planner MEPC and its algorithms but does not list software dependencies with version numbers.
Experiment Setup | No | The paper describes the planning algorithm and its underlying logical components but does not specify setup details such as hyperparameters, learning rates, or training configurations typical of machine learning.
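The table quotes the paper's claim that MEPC supports contingent planning by "extending breadth-first search to AND/OR graphs". As an illustrative sketch only, the classic recursive AND/OR search below shows what solving such a graph involves: at an OR node the agent picks one applicable action, while at an AND node the plan must cover every nondeterministic outcome of that action. This is not the authors' implementation (it is depth-first rather than breadth-first, for brevity), and the toy domain and all names are hypothetical.

```python
def and_or_search(initial, is_goal, actions, results):
    """Find a contingent plan in an AND/OR graph.

    actions(s) yields the actions applicable in state s (OR choice);
    results(s, a) lists the possible successor states (AND branch).
    Returns a nested plan: "done" at goal states, otherwise a pair
    (action, {outcome: subplan, ...}); None if no acyclic plan exists.
    """
    def solve(state, path):
        if is_goal(state):
            return "done"
        if state in path:                  # cycle: give up on this branch
            return None
        for a in actions(state):
            subplans = {}
            for s2 in results(state, a):   # AND: handle every outcome
                sub = solve(s2, path | {state})
                if sub is None:
                    break                  # this action fails; try the next
                subplans[s2] = sub
            else:
                return (a, subplans)       # OR: one working action suffices
        return None
    return solve(initial, frozenset())


# Toy nondeterministic domain (hypothetical): action "a" from state 0
# may land in state 1 or 2; "b" and "c" then reach goal state 3.
acts = {0: ["a"], 1: ["b"], 2: ["c"], 3: []}
res = {(0, "a"): [1, 2], (1, "b"): [3], (2, "c"): [3]}
plan = and_or_search(0, lambda s: s == 3,
                     acts.__getitem__, lambda s, a: res[(s, a)])
```

The returned `plan` branches on the observed outcome of "a": take "b" if state 1 was reached, "c" if state 2 was, which is exactly the conditional structure a contingent plan needs.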
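The "Open Datasets" row points to the muddy children example of Fagin et al. A minimal possible-worlds simulation (my own sketch, not the paper's MEPC encoding) illustrates the role of common knowledge there: each public announcement that nobody knows eliminates worlds for everyone at once, until the muddy children can deduce their own state.

```python
from itertools import product

def muddy_children_rounds(actual):
    """Possible-worlds simulation of the muddy children puzzle.

    actual: tuple of booleans, actual[i] is True iff child i is muddy.
    Returns (round, knowers): the round in which some muddy child
    first knows its own state, and the indices of all children who
    know at that point.
    """
    n = len(actual)
    # Father's announcement: at least one child is muddy.
    worlds = [w for w in product([False, True], repeat=n) if any(w)]

    def knows(world, i, worlds):
        # Child i considers possible every world agreeing with `world`
        # on all other foreheads; it knows its state iff its own
        # forehead is the same in all of them.
        poss = [w for w in worlds
                if all(w[j] == world[j] for j in range(n) if j != i)]
        return all(w[i] == poss[0][i] for w in poss)

    for round_no in range(1, n + 1):
        knowers = [i for i in range(n) if knows(tuple(actual), i, worlds)]
        if any(actual[i] for i in knowers):
            return round_no, sorted(knowers)
        # Public announcement "nobody knows": discard every world in
        # which some child would already have known.
        worlds = [w for w in worlds
                  if not any(knows(w, i, worlds) for i in range(n))]
    return None
```

With k muddy children the simulation terminates in round k, matching the classical analysis: k-1 rounds of silence must become common knowledge before each muddy child, seeing only k-1 muddy faces, can conclude it is muddy itself.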