Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Prioritised Planning: Completeness, Optimality, and Complexity
Authors: Jonathan Morag, Yue Zhang, Daniel Koyfman, Zhe Chen, Ariel Felner, Daniel Harabor, Roni Stern
JAIR 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experiment with our algorithms in a range of settings, including comparisons with existing PP baselines. Our results show how the different degrees of freedom of PP-based algorithms affect their behaviour, and provide the first-known results for solution-quality optimality for PP-based algorithms on a popular MAPF benchmark set. |
| Researcher Affiliation | Academia | Jonathan Morag, Ben-Gurion University of the Negev, Israel; Yue Zhang, Monash University, Australia; Daniel Koyfman, Ben-Gurion University of the Negev, Israel; Zhe Chen, Monash University, Australia; Ariel Felner, Ben-Gurion University of the Negev, Israel; Daniel Harabor, Monash University, Australia; Roni Stern, Ben-Gurion University of the Negev, Israel |
| Pseudocode | Yes | Algorithm 1: PaPS; Algorithm 2: Expand On Agents; Algorithm 3: Expand On Conflict; Algorithm 4: Generate |
| Open Source Code | Yes | Our code and data are available online at: www.github.com/J-morag/MAPF/releases/tag/25.JAIR.PP. |
| Open Datasets | Yes | We experimented using a grid-based MAPF benchmark from Stern et al. (2019). |
| Dataset Splits | No | The paper describes how problem instances were generated and varied (e.g., 'For each map, we used 25 different scenarios, each with 8 MAPF problem instances. In each instance, we varied the number of agents from 5 to 40, increasing by 5.'), but it does not specify explicit training/test/validation dataset splits for model training or evaluation in the conventional sense. |
| Hardware Specification | Yes | We ran our experiments on Linux virtual environments, on a cluster of AMD EPYC 7763 CPUs, with 16GB RAM each. |
| Software Dependencies | Yes | We used Open Java Runtime 18 (JRE 18) to run our code. |
| Experiment Setup | No | The paper describes various algorithms and compares them in terms of runtime and solution quality across different MAPF problem instances. However, it does not report specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or system-level training settings of the kind typically specified in experimental setups for learned models. |
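The instance enumeration quoted in the Dataset Splits row (25 scenarios per map, 8 instances per scenario, agent counts from 5 to 40 in steps of 5) can be sketched as follows. This is an illustrative sketch only, not the authors' code; the function name `enumerate_instances` is an assumption introduced here.

```python
# Illustrative sketch (not the authors' code): enumerate the benchmark
# instances described in the Dataset Splits row. For each map, 25
# scenarios each yield 8 instances, with agent counts 5, 10, ..., 40.
def enumerate_instances(num_scenarios=25):
    instances = []
    for scenario in range(1, num_scenarios + 1):
        for num_agents in range(5, 41, 5):  # 8 agent counts: 5, 10, ..., 40
            instances.append((scenario, num_agents))
    return instances

per_map = enumerate_instances()
print(len(per_map))  # 25 scenarios x 8 agent counts = 200 instances per map
```

Note that the 8 agent counts (5 through 40, step 5) account for the "8 MAPF problem instances" per scenario quoted above.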