Searching with Consistent Prioritization for Multi-Agent Path Finding

Authors: Hang Ma, Daniel Harabor, Peter J. Stuckey, Jiaoyang Li, Sven Koenig

AAAI 2019, pages 7643-7650 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In a variety of empirical comparisons, we demonstrate state-of-the-art solution qualities and success rates, often with similar runtimes to existing algorithms. We also develop new theoretical results that explore the limitations of prioritized planning, in terms of completeness and optimality, for the first time."
Researcher Affiliation | Academia | "1 University of Southern California, 2 Monash University; hangma@usc.edu, {daniel.harabor,peter.stuckey}@monash.edu, {jiaoyanl,skoenig}@usc.edu"
Pseudocode | Yes | "Algorithm 1 shows the high-level search of CBSw/P. Algorithm 2 shows the high-level search of PBS." (See the sketch after this table.)
Open Source Code | No | The paper does not contain any explicit statement about releasing source code or a link to a code repository for the described methodology.
Open Datasets | Yes | "We also use MAPF instances on two standard benchmark maps brc202d (a 481x530 four-neighbor grid) and lak503d (a 192x192 four-neighbor grid) of the game Dragon Age: Origins (Sturtevant 2012)."
Dataset Splits | No | The paper does not provide specific dataset split percentages, sample counts, or an explicit methodology for creating training, validation, and test splits. It mentions "different randomly generated start and target vertices", which describes instance generation, not dataset splitting.
Hardware Specification | Yes | "All experiments were run on a 2.50 GHz Intel Core i5-2450M laptop with 6 GB RAM"
Software Dependencies | No | The paper does not provide specific software names with version numbers (e.g., Python 3.x, PyTorch 1.x, CPLEX 12.x) needed to replicate the experiments.
Experiment Setup | Yes | "All experiments were run on a 2.50 GHz Intel Core i5-2450M laptop with 6 GB RAM and a runtime limit of one minute for each MAPF instance for each algorithm except that RND was given a runtime limit of one minute for each of its ten runs. We repeated each experiment 50 times for each number of agents using different randomly generated start and target vertices for the agents, and report the mean values." (See the evaluation sketch below.)
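For context on the Pseudocode row, here is a minimal sketch of what the high-level search of PBS (Algorithm 2) could look like. This is not the authors' implementation: the helpers `plan_path` and `first_collision` are hypothetical placeholders for the paper's low-level planner and collision check, and the sum-of-path-lengths cost used for child ordering is a simplification.

```python
# A minimal sketch of PBS's high-level search: a depth-first search over a
# tree of priority orderings, branching on the two ways to prioritize a
# colliding pair of agents. `plan_path` and `first_collision` are assumed,
# user-supplied callables, not APIs from the paper.

def pbs(agents, plan_path, first_collision):
    """agents                -- iterable of agent ids
    plan_path(i, node)       -- path for agent i avoiding all agents with
                                higher priority than i in `node`, or None
    first_collision(paths)   -- a colliding agent pair (i, j), or None
    Returns a dict of collision-free paths, or None on failure."""
    root = {"priorities": set(), "paths": {}}
    for i in agents:
        path = plan_path(i, root)  # root node: no priorities imposed yet
        if path is None:
            return None
        root["paths"][i] = path

    stack = [root]
    while stack:
        node = stack.pop()
        collision = first_collision(node["paths"])
        if collision is None:
            return node["paths"]  # collision-free plan found
        i, j = collision
        children = []
        for lo, hi in ((i, j), (j, i)):  # branch: hi gets priority over lo
            child = {"priorities": node["priorities"] | {(hi, lo)},
                     "paths": dict(node["paths"])}
            # Replan the newly lower-priority agent. (Full PBS also replans,
            # in topological order, any lower-priority agent whose path now
            # collides with a newly planned path; omitted here for brevity.)
            new_path = plan_path(lo, child)
            if new_path is not None:
                child["paths"][lo] = new_path
                children.append(child)
        # Explore the cheaper child first: sort descending, push onto the
        # stack, so the lowest-cost child is popped next.
        children.sort(key=lambda n: sum(len(p) for p in n["paths"].values()),
                      reverse=True)
        stack.extend(children)
    return None
```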
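Likewise, for the Experiment Setup row, a minimal sketch of the stated evaluation protocol: 50 randomly generated instances per agent count, a one-minute runtime limit per instance, and mean values reported. The `solve` and `random_instance` callables (and the `cost` attribute on solutions) are hypothetical stand-ins; the paper does not describe a scripting interface, and the special ten-run budget for RND is not modeled here.

```python
# A sketch of the reported evaluation loop under the stated setup.
import random
import statistics
import time

TIME_LIMIT = 60.0  # one minute per MAPF instance, as stated in the paper

def evaluate(solve, random_instance, agent_counts, repeats=50, seed=0):
    rng = random.Random(seed)
    results = {}
    for m in agent_counts:
        costs, solved = [], 0
        for _ in range(repeats):  # 50 instances with random starts/targets
            instance = random_instance(m, rng)
            start = time.perf_counter()
            solution = solve(instance, deadline=start + TIME_LIMIT)
            elapsed = time.perf_counter() - start
            if solution is not None and elapsed <= TIME_LIMIT:
                solved += 1
                costs.append(solution.cost)  # hypothetical cost attribute
        results[m] = {
            "success_rate": solved / repeats,
            "mean_cost": statistics.mean(costs) if costs else None,
        }
    return results
```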