Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Regarding Goal Bounding and Jump Point Search

Authors: Yue Hu, Daniel Harabor, Long Qin, Quanjun Yin

JAIR 2021 | Venue PDF | LLM Run Details

Each reproducibility variable is listed below with its classified result and the supporting LLM response (quoted from the paper, with the LLM's explanation where one was given).

Research Type: Experimental
"We evaluate our ideas in three distinct experiments: (i) partial goal bounding with Geometric Containers; (ii) partial goal bounding with Compressed Path Databases; (iii) performance comparisons vs. a variety of baseline algorithms on the benchmarks from Sturtevant's repository (Sturtevant, 2012) and full set of benchmark problems appearing at the 2014 Grid-based Path Planning Competition (footnote 2). ... In Table 3 we compare pathfinding performance across several distinct metrics: query time, heap operations and the size of the Open list."

Researcher Affiliation: Academia
"Yue Hu EMAIL, College of Systems Engineering, National University of Defense Technology, Kaifu District, Changsha, Hunan, China. Daniel Harabor EMAIL, Monash University, Melbourne, Australia. Long Qin EMAIL, Quanjun Yin EMAIL, College of Systems Engineering, National University of Defense Technology, Kaifu District, Changsha, Hunan, China."

Pseudocode: Yes
"Algorithm 1: Identify independent diagonal jump points on a grid map. ... Algorithm 6: Path extraction for Topping+."

Open Source Code: No
"Our implementations are based on codes made freely available by the original authors of JPS+BB (Rabin & Sturtevant, 2016) (available from GitHub, footnote 3) and by the original authors of SRC (Strasser et al., 2014) (available from the GPPC-14 organisers, footnote 1). ... We use C++ codes made available by the original authors (footnote 4). ... We use C++ codes made available by the original authors (footnote 5)."
Explanation: The paper states that its implementations are based on code made freely available by the original authors of prior works (JPS+BB and SRC), and provides links to those. It also mentions using C++ code made available by the original authors of the baseline algorithms CH-SG and CH-JP. However, it does not state that the authors of this paper release the source code for their own new methods (JPS+BB+, TOPS, Topping+).

Open Datasets: Yes
"We evaluate our ideas ... on the benchmarks from Sturtevant's repository (Sturtevant, 2012) and full set of benchmark problems appearing at the 2014 Grid-based Path Planning Competition (footnote 2)." Footnote 2: https://movingai.com

Dataset Splits: No
"The set of maps and instances used in our experiments are summarised in Table 1. All codes are compiled with GCC 4.8.5 and all experiments are run on Intel Xeon(R) CPU E5-2678 v3 @ 2.50GHz 48 with 64 GB of RAM."
Explanation: The paper describes the benchmark sets and the number of instances used for evaluation, but it specifies no training/validation/test splits: the instances are individual pathfinding queries rather than data partitioned in the machine-learning sense.

Hardware Specification: Yes
"All codes are compiled with GCC 4.8.5 and all experiments are run on Intel Xeon(R) CPU E5-2678 v3 @ 2.50GHz 48 with 64 GB of RAM."

Software Dependencies: Yes
"All codes are compiled with GCC 4.8.5 and all experiments are run on Intel Xeon(R) CPU E5-2678 v3 @ 2.50GHz 48 with 64 GB of RAM. ... Our implementations are based on codes made freely available by the original authors of JPS+BB (Rabin & Sturtevant, 2016) ... and by the original authors of SRC (Strasser et al., 2014) ..."

Experiment Setup: Yes
"Our implementations are based on codes made freely available... We also follow their suggested optimisations: (1) we use buckets indexed by cost, which speeds up sorting of candidates on the Open list to accelerate Dijkstra search; (2) we enhance the Open list, implemented as a binary heap, with a hash function, which speeds up membership tests and update operations; (3) we use a function-pointer lookup table, which eliminates some conditional instructions and speeds up the identification of canonical moves (Sturtevant et al., 2015). ... we run each online experiment as a single thread in a single core when no threads of other users are running on the machine, in case interruptions and shared cache create problems. In order to mitigate the impact of CPU timing variations, we run each pathfinding instance five times."
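Optimisation (1) quoted above, buckets indexed by cost, replaces the usual priority queue in Dijkstra search with an array of FIFO buckets scanned in increasing cost order. The sketch below is only an illustration of that idea under the assumption of non-negative integer edge weights (as on uniform-cost grid maps); it is not the authors' implementation, which is in C++, and the function and variable names here are invented for the example.

```python
from collections import defaultdict, deque

def dijkstra_bucket(graph, source):
    """Dijkstra with a bucket queue: Open-list candidates live in
    buckets indexed by their (integer) tentative cost, so no heap
    sorting is needed. Illustrative sketch only.

    `graph` maps a node to an iterable of (neighbour, weight) pairs,
    with non-negative integer weights.
    """
    dist = {source: 0}
    buckets = defaultdict(deque)  # cost -> nodes with that tentative cost
    buckets[0].append(source)
    cost, max_cost = 0, 0
    while cost <= max_cost:            # scan buckets in increasing cost order
        while buckets[cost]:
            u = buckets[cost].popleft()
            if dist[u] < cost:         # stale entry: u was settled at a lower cost
                continue
            for v, w in graph.get(u, ()):
                nd = cost + w
                if v not in dist or nd < dist[v]:
                    dist[v] = nd
                    buckets[nd].append(v)  # lazy insert; old entries go stale
                    max_cost = max(max_cost, nd)
        cost += 1
    return dist
```

Note the lazy-insertion trick: instead of a decrease-key operation, a node is simply re-appended to a cheaper bucket and stale entries are skipped when popped. Optimisation (2) in the quote, a hash function attached to a binary-heap Open list, addresses the same membership-test and update cost for the heap-based variant.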