Diagnosability Planning for Controllable Discrete Event Systems

Authors: Hassan Ibrahim, Philippe Dague, Alban Grastien, Lina Ye, Laurent Simon

AAAI 2017

Reproducibility: Variable | Result | LLM Response
Research Type Experimental We have created a special benchmark and tested three proposed methods, according to the recycling level of twin plants construction, with one cost function used for plan optimality and an optional heuristic. We present experimental results on a scalable benchmark.
Researcher Affiliation Academia Hassan Ibrahim (1), Philippe Dague (1), Alban Grastien (2), Lina Ye (1), Laurent Simon (3). (1) LRI, Univ. Paris-Sud and CNRS, Univ. Paris-Saclay, Orsay, France; email: firstname.name@lri.fr. (2) Data61 and Australian National University, Canberra, Australia; email: alban.grastien@data61.csiro.au. (3) LaBRI, Univ. Bordeaux and CNRS, Bordeaux, France; email: lsimon@labri.fr.
Pseudocode Yes Algorithm 1 General algorithm
Open Source Code No Experimental results demonstrated the efficiency of the approach by exploiting the different learning strategies (code and benchmark are available upon request to the first author).
Open Datasets No In order to test our proposed approaches on a benchmark, we created a rectangular grid of components by repeating the active model (i.e., without its actions) of our running example in Figure 1, reconfigured the actions in all components, and added global actions between components. Code and benchmark are available upon request to the first author.
Dataset Splits No The paper describes generating a benchmark (a rectangular grid of components) and testing methods on it, but it does not specify train, validation, or test dataset splits in terms of percentages, sample counts, or predefined standard splits typically used for model training and evaluation.
Hardware Specification No The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory, cloud resources) used to run the experiments.
Software Dependencies No The paper does not provide specific software dependencies or their version numbers (e.g., Python 3.x, PyTorch 1.x, specific solvers with versions) used for the experiments.
Experiment Setup Yes We have created a special benchmark and tested three proposed methods, according to the recycling level of twin plants construction, with one cost function used for plan optimality and an optional heuristic. Let us call g(π) the cost of plan π, i.e., the sum of its elementary action costs (e.g., its length if all actions have cost 1), which we want to minimize. The idea is to order the possible plans according not only to their cost but also to their chance of reaching a diagnosable belief state, heuristically evaluated by the ratio of good pairs in that belief state, denoted h(π).
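The cost-plus-heuristic ordering quoted above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the plan representation, the `good_pair_ratio` estimate, and the tie-breaking rule (lower cost g first, higher heuristic h on ties) are all assumptions made here for demonstration.

```python
import heapq

def order_plans(plans, action_cost, good_pair_ratio):
    """Order candidate plans by cost g (ascending), breaking ties by the
    heuristic h, an estimate of the ratio of good pairs in the reached
    belief state (descending).  `plans` maps a plan name to its action
    sequence; `good_pair_ratio` returns a value in [0, 1].  All names
    here are illustrative, not the paper's API.
    """
    queue = []
    for name, actions in plans.items():
        g = sum(action_cost(a) for a in actions)  # plan cost (= length if all costs are 1)
        h = good_pair_ratio(name)                 # heuristic chance of a diagnosable belief state
        heapq.heappush(queue, (g, -h, name))      # negate h so higher h sorts first
    return [name for _, _, name in sorted(queue)]

# Toy usage: unit action costs and hand-picked heuristic values.
plans = {"p1": ["a", "b"], "p2": ["a"], "p3": ["a", "c"]}
ratios = {"p1": 0.5, "p2": 0.2, "p3": 0.9}
ordering = order_plans(plans, lambda a: 1, ratios.get)
# p2 has the lowest cost; p3 beats p1 on the heuristic at equal cost.
```

The ordering key `(g, -h)` realizes the quoted idea directly: plans are compared primarily by cost, and the heuristic only discriminates among equally cheap plans.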