Hybrid Search for Efficient Planning with Completeness Guarantees

Authors: Kalle Kujanpää, Joni Pajarinen, Alexander Ilin

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We apply the proposed search method to a recently proposed subgoal search algorithm and evaluate the algorithm trained on offline data on complex planning problems. We demonstrate that our complete subgoal search not only guarantees completeness but can even improve performance in terms of search expansions for instances that the high-level could solve without low-level augmentations."
Researcher Affiliation | Collaboration | Kalle Kujanpää (1,3), Joni Pajarinen (2,3), Alexander Ilin (1,3,4). 1: Department of Computer Science, Aalto University; 2: Department of Electrical Engineering and Automation, Aalto University; 3: Finnish Center for Artificial Intelligence FCAI; 4: System 2 AI. {kalle.kujanpaa,joni.pajarinen,alexander.ilin}@aalto.fi
Pseudocode | No | No explicitly labeled 'Algorithm' or 'Pseudocode' blocks were found.
Open Source Code | No | The paper does not provide a specific link to source code for the methodology or explicitly state that the code is publicly available.
Open Datasets | Yes | "We use the four environments considered in [18]: Box-World [44], Sliding Tile Puzzle [17], Gym-Sokoban [37] and Travelling Salesman (see Fig. 2)."
Dataset Splits | No | The paper does not specify exact dataset splits (e.g., percentages or sample counts) for training, validation, or testing.
Hardware Specification | No | The paper mentions 'computational resources provided by the Aalto Science-IT project and CSC, Finnish IT Center for Science' but does not specify exact hardware details such as GPU/CPU models or memory.
Software Dependencies | No | The paper mentions using a 'ResNet-based network [10]' and 'Adam [16]' but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | No | The paper discusses the hyperparameter ε and its impact but does not provide specific values for common training hyperparameters such as learning rate, batch size, or number of epochs.