Guidance Graph Optimization for Lifelong Multi-Agent Path Finding
Authors: Yulun Zhang, He Jiang, Varun Bhatt, Stefanos Nikolaidis, Jiaoyang Li
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we show that (1) our guidance graphs improve the throughput of three representative lifelong MAPF algorithms in eight benchmark maps, and (2) our update model can generate guidance graphs for maps as large as 93 × 91 and as many as 3,000 agents. |
| Researcher Affiliation | Academia | Yulun Zhang¹, He Jiang¹, Varun Bhatt², Stefanos Nikolaidis² and Jiaoyang Li¹ — ¹Robotics Institute, Carnegie Mellon University; ²Thomas Lord Department of Computer Science, University of Southern California |
| Pseudocode | Yes | Figure 3 gives an overview of PIU, and Algorithm 1 provides the pseudocode. |
| Open Source Code | Yes | We include the source code at: https://github.com/lunjohnzhang/ggo_public. |
| Open Datasets | Yes | Table 2 outlines our experimental setup. Column 3 outlines the maps, all being 4-neighbor grids, including two warehouse maps (warehouse-33-36 and warehouse-20-17) from previous works [Li et al., 2021; Zhang et al., 2023b] and six maps (random-32-32-20, maze-32-32-4, empty-48-48, room-64-64-8, random-64-64-20, and den312d) from the MAPF benchmark [Stern et al., 2019]. |
| Dataset Splits | No | The paper does not explicitly describe training, validation, and test splits for a dataset in the traditional machine learning sense. The 'update model' is optimized via CMA-ES, which is an evolutionary strategy, and the evaluation is done through simulations, not on predefined data splits. |
| Hardware Specification | Yes | We run our experiments on two machines: (1) a local machine with a 64-core AMD Ryzen Threadripper 3990X CPU, 192 GB of RAM, and an Nvidia RTX 3090Ti GPU, and (2) a high-performance cluster with numerous 64-core AMD EPYC 7742 CPUs, each with 256 GB of RAM. |
| Software Dependencies | No | The paper mentions software used (PyTorch, Pyribs, Python, C++, PBS, SIPP) but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | We choose the hyperparameters of CMA-ES and PIU such that they run the same number of simulations to ensure a fair comparison. In particular, we set batch size b = 100 and the number of iterations I = 100 for both CMA-ES and PIU... In each iteration, we select the top N_best = 50 solutions to update the Gaussian distribution. For CMA-ES, each evaluation runs N_e^cma = 5 simulations... For PIU, each evaluation runs the PIU algorithm for N_p = 5 iterations and each iteration runs N_e^piu = 1 simulation... We run each simulation for 1,000 timesteps. ... we use PBS [Ma et al., 2019] and SIPP [Phillips and Likhachev, 2011] as the MAPF solver and the single-agent solver, respectively, in both RHCR and DPP and use h = 5 and w = 10 in RHCR. |
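The quoted setup (batch size b = 100, I = 100 iterations, the top N_best = 50 samples used to update the Gaussian distribution, and 5 simulations averaged per evaluation) follows a standard evolution-strategy loop. The sketch below illustrates that loop only; it is not the authors' implementation (which uses CMA-ES, e.g. via pyribs, and a full lifelong MAPF simulator). It uses a simplified (μ, λ)-style update with a diagonal Gaussian rather than full covariance adaptation, and `simulate_throughput` is a hypothetical toy stand-in for running a 1,000-timestep lifelong MAPF simulation with the candidate guidance-graph edge weights.

```python
import numpy as np

def simulate_throughput(edge_weights: np.ndarray, seed: int) -> float:
    # Toy surrogate standing in for one 1,000-timestep lifelong MAPF simulation;
    # a real reproduction would call into the simulator from the released code
    # and return the measured throughput for these guidance-graph edge weights.
    rng = np.random.default_rng(seed)
    return -float(np.sum((edge_weights - 1.5) ** 2)) + 0.01 * rng.standard_normal()

def optimize_guidance_graph(num_edges: int,
                            batch_size: int = 100,    # b
                            iterations: int = 100,    # I
                            n_best: int = 50,         # N_best
                            sims_per_eval: int = 5):  # N_e^cma
    rng = np.random.default_rng(0)
    mean = np.ones(num_edges)       # initial edge weights of the guidance graph
    std = np.full(num_edges, 0.5)   # per-edge standard deviation of the Gaussian

    for _ in range(iterations):
        # Sample a batch of candidate guidance graphs from the current Gaussian.
        batch = mean + std * rng.standard_normal((batch_size, num_edges))
        batch = np.clip(batch, 0.1, None)  # keep edge weights positive

        # Evaluate each candidate by averaging throughput over several simulations.
        scores = np.array([
            np.mean([simulate_throughput(x, seed=s) for s in range(sims_per_eval)])
            for x in batch
        ])

        # Keep the top N_best candidates and refit the Gaussian to them.
        elite = batch[np.argsort(scores)[-n_best:]]
        mean = elite.mean(axis=0)
        std = elite.std(axis=0) + 1e-8

    return mean  # optimized guidance-graph edge weights

if __name__ == "__main__":
    weights = optimize_guidance_graph(num_edges=20, iterations=10)
    print(weights.round(2))
```

With the toy surrogate the loop converges toward the surrogate's optimum; swapping in a real simulator (and a proper CMA-ES covariance/step-size update) recovers the setup described in the quoted text.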