Choosing the Initial State for Online Replanning
Authors: Maximilian Fickert, Ivan Gavran, Ivan Fedotov, Jörg Hoffmann, Rupak Majumdar, Wheeler Ruml (pp. 12311–12319)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that this approach yields better behavior than either guessing a constant or trying to predict replanning time in advance. By making replanning more effective and easier to implement, this work aids in creating planning systems that can better handle the inevitable exigencies of real-world execution. |
| Researcher Affiliation | Academia | 1 Saarland University, Saarland Informatics Campus, Saarbrücken, Germany; 2 Max Planck Institute for Software Systems, Kaiserslautern, Germany; 3 Department of Computer Science, University of New Hampshire, USA |
| Pseudocode | Yes | Algorithm 1 shows the pseudocode of the MIST algorithm. |
| Open Source Code | No | The paper states, 'We implemented MIST in Fast Downward (Helmert 2006).' but does not provide any link or explicit statement about releasing the source code for their specific MIST implementation. Fast Downward is a third-party planner. |
| Open Datasets | No | The paper states, 'We adapted the IPC domains Elevators, Logistics, Rovers, Transport, and Visit All to our setting...' and 'We furthermore experiment with Tidybot, which we adapted to satisfy (ii).' While IPC domains are generally public, the paper does not provide a specific link, DOI, or formal citation (with authors and year) for the *adapted* versions of these datasets used in their experiments. Access to their specific experimental data is not concrete. |
| Dataset Splits | No | The paper describes how they generated instances by splitting goals and scheduling their appearance relative to initial plan execution ('The instances were adapted by splitting the set of goals in two...'; 'the second set of goals appears after 10% of the initial plan is executed.'). However, it does not specify explicit training, validation, or test dataset splits (e.g., in percentages or counts) in the typical machine learning sense. |
| Hardware Specification | Yes | The experiments were run on a cluster of 2.20 GHz Intel Xeon E5-2660 CPUs. |
| Software Dependencies | No | The paper mentions implementing MIST in 'Fast Downward (Helmert 2006)' and using the 'popular FF heuristic (Hoffmann and Nebel 2001)'. However, it does not provide specific version numbers for Fast Downward, the FF heuristic, or any other software libraries, which is necessary for reproducibility. |
| Experiment Setup | Yes | In our implementation, we use a standard A open list for each reference state, using the MIST extensions to the f-function only to select the open list to be used for the next expansion to avoid having to re-sort the open list. [...] For the expansion delay, we use a moving average over the last 100 expansions. [...] The overall best results are obtained with n R = 8 for MIST and n R = 9 for MISTrec, and we use these settings for the remaining experiments. [...] The time and memory limits were set to 30 minutes and 4 GB, respectively. |
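The quoted setup describes two mechanisms: one A*-style open list per reference state, with the MIST-modified f-value used only to choose which list to expand from next (so the per-list heaps never need re-sorting), and an expansion-delay estimate maintained as a moving average over the last 100 expansions. A minimal sketch of these two mechanisms is given below; it is an illustration based on the quoted description, not the authors' implementation, and the names `ExpansionDelay`, `select_open_list`, and `mist_f` are hypothetical.

```python
import heapq
from collections import deque


class ExpansionDelay:
    """Moving average of expansion delay over the last `window` expansions.

    The delay of an expansion is the number of expansions that occurred
    between a node's insertion into the open list and its expansion.
    """

    def __init__(self, window=100):
        self.window = window
        self.samples = deque()
        self.total = 0
        self.counter = 0  # global expansion counter

    def record_expansion(self, inserted_at):
        """Record one expansion of a node inserted at counter `inserted_at`."""
        self.counter += 1
        delay = self.counter - inserted_at
        self.samples.append(delay)
        self.total += delay
        if len(self.samples) > self.window:
            self.total -= self.samples.popleft()

    def average(self):
        return self.total / len(self.samples) if self.samples else 0.0


def select_open_list(open_lists, mist_f):
    """Pick the open list whose best node minimizes a MIST-style adjusted
    f-value. Each heap stays internally sorted by the standard A* f-value;
    the adjustment only decides which heap is expanded next.

    `open_lists` maps a reference state to a heap of (f, node) pairs;
    `mist_f(ref_state, f)` is a stand-in for the paper's modified f-function.
    """
    best = None
    for ref_state, heap in open_lists.items():
        if not heap:
            continue
        f, _node = heap[0]  # peek at the best standard f-value in this list
        score = mist_f(ref_state, f)
        if best is None or score < best[0]:
            best = (score, ref_state)
    return best[1] if best else None
```

With this structure, switching reference states costs only one peek per open list per expansion, matching the quoted rationale of avoiding re-sorting.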