Flexible and Scalable Partially Observable Planning with Linear Translations
Authors: Blai Bonet, Hector Geffner
AAAI 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | From the Experiments section: We implemented LW1 on top of the K-replanner. To compare the two planners, we also added a front end to the K-replanner so that it can handle the syntax of the contingent benchmarks. For LW1, we converted these benchmarks by hand into the syntax based on multivalued state and observable variables. The experiments were performed using the classical planner FF (Hoffmann and Nebel 2001) on a cluster of AMD Opteron 6378 CPUs with 4 GB of memory, running at 2.4 GHz. The third on-line planner considered in the experiments is HCP; we took the data from Shani et al. (2014), where HCP is compared with CLG, SDR, and MPSR. HCP can be understood as extending the K-replanner with a subgoaling mechanism: in each replanning episode, rather than planning for the top goal, the planner looks for a closer subgoal, namely the preconditions of a first sensing action that is selected heuristically. Table 1 compares LW1, the K-replanner with the front end, and HCP. (A hypothetical sketch of this subgoaling replanning loop appears after the table.) |
| Researcher Affiliation | Academia | Blai Bonet, Universidad Simón Bolívar, Caracas, Venezuela, bonet@ldc.usb.ve; Hector Geffner, ICREA & DTIC, Universitat Pompeu Fabra, 08018 Barcelona, Spain, hector.geffner@upf.edu |
| Pseudocode | No | The paper does not include a figure, block, or section labeled 'Pseudocode' or 'Algorithm', nor any structured steps formatted like code. |
| Open Source Code | No | The paper states 'We implemented LW1 on top of the K-replanner' but does not provide any link to its source code or explicitly state that it is open-sourced. |
| Open Datasets | No | The paper mentions 'existing benchmarks' and specific domain names (e.g., 'clog', 'colorballs', 'Minesweeper') and references a 'problem generator by Bonet and Geffner (2013)', but it does not provide links, DOIs, or full formal citations that would give direct access to the datasets used in the experiments. No concrete access information is given. |
| Dataset Splits | No | The paper does not specify exact split percentages, absolute sample counts for each split, or reference predefined splits with citations for training, validation, and test datasets. It mentions 'randomly generated hidden initial states' but not a formal data split strategy for validation. |
| Hardware Specification | Yes | The experiments were performed using the classical planner FF (Hoffmann and Nebel 2001) on a cluster of AMD Opteron 6378 CPUs with 4 GB of memory, running at 2.4 GHz. |
| Software Dependencies | No | The paper mentions 'the classical planner FF (Hoffmann and Nebel 2001)' but does not provide version numbers for FF or for any other key software components used in the experiments. |
| Experiment Setup | No | The paper describes the experimental environment (e.g., using FF as the classical planner) and the domains, but it does not provide specific hyperparameter values or detailed system-level training settings such as learning rates, batch sizes, optimizer configurations, or training schedules. |
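The subgoaling mechanism quoted in the Research Type row (plan toward the preconditions of a heuristically selected sensing action rather than the top goal, and replan as observations arrive) lends itself to a short illustration. The sketch below is a hypothetical rendering under stated assumptions, not code from LW1, the K-replanner, or HCP; every name in it (`replan_with_subgoals`, `select_sensing_action`, `classical_plan`, `execute_and_observe`, and the `belief` methods) is an illustrative placeholder rather than the authors' API.

```python
from typing import Callable

def replan_with_subgoals(belief, top_goal,
                         select_sensing_action: Callable,
                         classical_plan: Callable,
                         execute_and_observe: Callable):
    """Hypothetical online replanning loop with subgoaling.

    In each replanning episode, instead of planning for the top goal,
    a sensing action is chosen heuristically and its preconditions are
    used as the subgoal; a new episode starts whenever an observation
    is obtained.
    """
    while not belief.satisfies(top_goal):
        # Heuristically pick the first sensing action worth reaching.
        sensing = select_sensing_action(belief, top_goal)
        # Subgoal: its preconditions; fall back to the top goal if none.
        subgoal = sensing.preconditions if sensing is not None else top_goal
        # Solve a classical problem over a sampled (assumed) state.
        plan = classical_plan(belief.sample_state(), subgoal)
        for action in plan:
            obs = execute_and_observe(action)
            belief = belief.update(action, obs)
            if obs is not None:
                break  # new observation: trigger another replanning episode
    return belief
```

The planner and heuristics are passed in as callables so that, for example, a classical planner such as FF (the one used in the paper's experiments) could stand behind `classical_plan`; how the belief is represented and updated is left abstract, since the paper's translations are not reproduced here.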