Euclidean Pathfinding with Compressed Path Databases

Authors: Bojie Shen, Muhammad Aamir Cheema, Daniel Harabor, Peter J. Stuckey

IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In a range of experiments and empirical comparisons we show that: (i) the auxiliary data structures required by the new method are cheap to build and store; (ii) for optimal search, the new algorithm is faster than a range of recent ESPP planners, with speedups ranging from several factors to over one order of magnitude; (iii) for anytime search, where feasible solutions are needed fast, we report even better runtimes. We run experiments on a variety of grid map benchmarks which are described in [Sturtevant, 2012], including 373 game maps from four sets of maps: DAO (156), DA (67), BG (75), SC (75).
Researcher Affiliation | Academia | Bojie Shen, Muhammad Aamir Cheema, Daniel D. Harabor and Peter J. Stuckey, Faculty of Information Technology, Monash University, Melbourne, Australia. {bojie.shen, aamir.cheema, daniel.harabor, peter.stuckey}@monash.edu
Pseudocode | Yes | Algorithm 1: End Point Search (EPS). A hedged sketch of the CPD lookup-and-extract primitive that EPS builds on appears after this table.
Open Source Code | No | The paper states 'We implemented our algorithm in C++', but it does not provide a repository link, an explicit code-release statement, or any indication of code availability in supplementary materials for the method described in this paper. It does provide links to code for *comparison* algorithms (e.g., Polyanya, ENLSVG).
Open Datasets | Yes | We run experiments on a variety of grid map benchmarks which are described in [Sturtevant, 2012], including 373 game maps from four sets of maps: DAO (156), DA (67), BG (75), SC (75). All benchmarks are available from the HOG2 online repository: https://github.com/nathansttt/hog2 (a map-loading sketch appears after this table).
Dataset Splits | No | The paper uses pre-existing grid map benchmarks but does not provide dataset split information (exact percentages, sample counts, or a splitting methodology) for training, validation, or testing.
Hardware Specification | Yes | All the experiments are performed on a 2.6 GHz Intel Core i7 machine with 16GB of RAM and running OSX 10.14.6.
Software Dependencies | No | The paper mentions 'We implemented our algorithm in C++' and 'running OSX 10.14.6', but it does not give ancillary software details with version numbers (e.g., the compiler or libraries used in the C++ implementation).
Experiment Setup | No | The paper describes the general computing environment ('2.6 GHz Intel Core i7 machine with 16GB of RAM and running OSX 10.14.6', implementation in C++) but does not report detailed setup parameters such as hyperparameter values, model initialization, or training configurations of the kind expected for machine-learning models.
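
The paper's Algorithm 1 (End Point Search) builds on compressed path database (CPD) lookups: a CPD stores, for every source node, the first move on an optimal grid path toward every target, and whole paths are recovered by chaining these lookups. The sketch below is not the authors' pseudocode; it is a minimal, assumed illustration of that first-move idea using an uncompressed lookup table built by BFS on a toy 4-connected grid. All names are hypothetical.

```cpp
// Toy illustration (not the authors' code) of the first-move / path-extraction
// primitive behind compressed path databases: build a per-target first-move
// table by BFS, then extract a path by repeatedly following the stored move.
#include <cstdio>
#include <queue>
#include <vector>

constexpr int W = 8, H = 8;                        // toy grid dimensions
const int DX[4] = {1, -1, 0, 0}, DY[4] = {0, 0, 1, -1};

int id(int x, int y) { return y * W + x; }

// first_move[t][s] = move to take at s on an optimal path to t (-1 if unreachable).
std::vector<std::vector<int>> build_first_move_table(const std::vector<bool>& blocked) {
    std::vector<std::vector<int>> table(W * H, std::vector<int>(W * H, -1));
    for (int t = 0; t < W * H; ++t) {
        if (blocked[t]) continue;
        std::queue<int> open;                      // BFS outward from the target t
        std::vector<int> dist(W * H, -1);
        dist[t] = 0;
        open.push(t);
        while (!open.empty()) {
            int u = open.front(); open.pop();
            int ux = u % W, uy = u / W;
            for (int m = 0; m < 4; ++m) {
                int vx = ux + DX[m], vy = uy + DY[m];
                if (vx < 0 || vx >= W || vy < 0 || vy >= H) continue;
                int v = id(vx, vy);
                if (blocked[v] || dist[v] != -1) continue;
                dist[v] = dist[u] + 1;
                table[t][v] = m ^ 1;               // move from v back toward u (opposite direction)
                open.push(v);
            }
        }
    }
    return table;
}

// Path extraction: follow stored first moves from s until t is reached.
std::vector<int> extract_path(const std::vector<std::vector<int>>& table, int s, int t) {
    std::vector<int> path{s};
    while (s != t) {
        int m = table[t][s];
        if (m == -1) return {};                    // unreachable
        s = id(s % W + DX[m], s / W + DY[m]);
        path.push_back(s);
    }
    return path;
}

int main() {
    std::vector<bool> blocked(W * H, false);
    for (int y = 1; y < 7; ++y) blocked[id(4, y)] = true;   // a wall with gaps at top and bottom
    auto table = build_first_move_table(blocked);
    for (int node : extract_path(table, id(1, 3), id(6, 3)))
        std::printf("(%d,%d) ", node % W, node / W);
    std::printf("\n");
    return 0;
}
```

A real CPD does not store the full table: each row of first moves is compressed (e.g. with run-length encoding) and a lookup becomes a binary search over runs, which is what makes the data structure cheap to store while keeping queries fast.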
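
For the benchmarks, the HOG2 / MovingAI grid maps are plain-text files with a short header (type, height, width) followed by one character per cell. The loader below is an assumed convenience sketch for reproduction, not code from the paper; it treats '.' and 'G' as traversable and all other characters (e.g. '@', 'T', 'O', 'W') as blocked, which is the usual convention for these benchmarks.

```cpp
// Hedged sketch of a loader for MovingAI-style .map files used by the HOG2 benchmarks.
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

struct GridMap {
    int width = 0, height = 0;
    std::vector<bool> traversable;                // row-major, height * width cells
    bool at(int x, int y) const { return traversable[y * width + x]; }
};

bool load_map(const std::string& filename, GridMap& map) {
    std::ifstream in(filename);
    if (!in) return false;
    std::string token;
    // Header: "type octile", "height H", "width W", "map".
    while (in >> token && token != "map") {
        if (token == "height") in >> map.height;
        else if (token == "width") in >> map.width;
    }
    map.traversable.assign(static_cast<size_t>(map.width) * map.height, false);
    std::string row;
    std::getline(in, row);                        // consume the rest of the "map" line
    for (int y = 0; y < map.height; ++y) {
        if (!std::getline(in, row)) return false;
        for (int x = 0; x < map.width && x < static_cast<int>(row.size()); ++x)
            map.traversable[y * map.width + x] = (row[x] == '.' || row[x] == 'G');
    }
    return true;
}

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: load_map <file.map>\n"; return 1; }
    GridMap map;
    if (!load_map(argv[1], map)) { std::cerr << "failed to read " << argv[1] << "\n"; return 1; }
    std::cout << map.width << " x " << map.height << " grid loaded\n";
    return 0;
}
```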