Structured Possibilistic Planning Using Decision Diagrams

Authors: Nicolas Drougard, Florent Teichteil-Königsbuch, Jean-Loup Farges, Didier Dubois

AAAI 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments show that PPUDD's computation time is much lower than that of SPUDD, Symbolic-HSVI, and APPL for possibilistic and probabilistic versions of the same benchmarks under either total or mixed observability, while still providing high-quality policies.
Researcher Affiliation | Collaboration | Nicolas Drougard, Florent Teichteil-Königsbuch, Jean-Loup Farges (Onera, The French Aerospace Lab, 2 avenue Édouard Belin, 31055 Toulouse Cedex 4, France); Didier Dubois (IRIT, Paul Sabatier University, 118 route de Narbonne, 31062 Toulouse Cedex 4, France)
Pseudocode | Yes | Algorithm 1: PPUDD
Open Source Code | No | The paper does not include any statement or link regarding the public availability of the source code for the described methodology.
Open Datasets | Yes | navigation domain used in planning competitions (Sanner 2011) ... Rocksample problem (RS) against a recent probabilistic MOMDP planner, APPL (Ong et al. 2010), and a POMDP planner using ADDs, symbolic HSVI (Sim et al. 2008).
Dataset Splits | No | The paper describes using specific benchmarks (navigation domain, Rocksample problem) but does not provide explicit details about train/validation/test dataset splits (e.g., percentages, sample counts, or specific split files).
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) or mention specific solver versions.
Experiment Setup | Yes | In this domain, a robot navigates in a grid where it must reach some goal location most reliably. It can apply actions going north, east, south, west and stay, which all cost 1 except on the goal... This probabilistic model is approximated by two possibilistic ones where: the preference of reaching the goal is 1; in the first model (M1) the highest probability of each Bernoulli distribution is replaced by 1 (for possibility normalization reasons) and the same value for the lowest probability is kept; for the second model (M2), the probability of disappearing is replaced by 1 and the other one is kept. ... Both algorithms are approximate and anytime, so we decided to stop them when they reach a precision of 1.