Front-to-End Bidirectional Heuristic Search with Near-Optimal Node Expansions

Authors: Jingwei Chen, Robert C. Holte, Sandra Zilles, Nathan R. Sturtevant

IJCAI 2017

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that NBS competes with or outperforms existing bidirectional search algorithms, and often outperforms A* as well. |
| Researcher Affiliation | Academia | Jingwei Chen, University of Denver, Denver, CO, USA, jingchen@cs.du.edu; Robert C. Holte, University of Alberta, Edmonton, AB, Canada, rholte@ualberta.ca; Sandra Zilles, University of Regina, Regina, SK, Canada, zilles@uregina.ca; Nathan R. Sturtevant, University of Denver, Denver, CO, USA, sturtevant@cs.du.edu |
| Pseudocode | Yes | The pseudocode for NBS is shown in Algorithms 1 and 2. [...] Algorithm 3: NBS pseudocode for selecting the best pair from the Open list. |
| Open Source Code | No | The paper does not provide any statement or link indicating the public availability of the source code for the described methodology. |
| Open Datasets | Yes | In Table 1 we present results on problems from four different domains, including grid-based pathfinding problems [Sturtevant, 2012] (brc maps from Dragon Age: Origins (DAO)), random 4-peg Tower of Hanoi (TOH) problems, random pancake puzzles, and the standard 15 puzzle instances [Korf, 1985]. |
| Dataset Splits | No | The paper uses standard benchmark problems but does not specify training, validation, or test dataset splits (e.g., percentages, counts, or named standard splits). |
| Hardware Specification | No | The paper does not specify the hardware (e.g., CPU or GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., programming languages, libraries, frameworks) used in the experiments. |
| Experiment Setup | No | The paper does not report experimental setup details such as concrete hyperparameter values, training configurations, or system-level settings. |
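The "best pair" selection that the paper's Algorithm 3 implements is based on NBS's lower bound for a forward node u and backward node v, lb(u, v) = max(f_F(u), f_B(v), g_F(u) + g_B(v)). The sketch below illustrates that criterion only; the function names and the (g, h) tuple representation are illustrative, and the brute-force scan over all pairs stands in for the paper's more efficient partitioned-queue implementation.

```python
from itertools import product

def lower_bound(g_f, h_f, g_b, h_b):
    """NBS lower bound for a forward/backward node pair:
    lb(u, v) = max(f_F(u), f_B(v), g_F(u) + g_B(v))."""
    return max(g_f + h_f, g_b + h_b, g_f + g_b)

def select_best_pair(open_f, open_b):
    """Return the pair minimizing lb(u, v), scanning all pairs.

    Nodes are (g, h) tuples. This O(|open_f| * |open_b|) scan is for
    illustration only -- the paper's Algorithm 3 achieves the same
    selection efficiently with specially structured open lists.
    """
    return min(
        product(open_f, open_b),
        key=lambda p: lower_bound(p[0][0], p[0][1], p[1][0], p[1][1]),
    )
```

For example, with forward open list [(0, 5), (2, 4)] and backward open list [(0, 5), (3, 3)], the pair of the two start nodes has lb = max(5, 5, 0) = 5, which is lower than any other pair, so it is selected first.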