Traffic Flow Optimisation for Lifelong Multi-Agent Path Finding

Authors: Zhe Chen, Daniel Harabor, Jiaoyang Li, Peter J. Stuckey

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the idea in two large-scale settings: one-shot MAPF, where each agent has a single destination, and lifelong MAPF, where agents are continuously assigned new destinations. Empirically, we report large improvements in solution quality for one-shot MAPF and in overall throughput for lifelong MAPF.
Researcher Affiliation | Academia | (1) Department of Data Science and Artificial Intelligence, Monash University, Melbourne, Australia; (2) Robotics Institute, Carnegie Mellon University, Pittsburgh, USA; (3) OPTIMA Australian Research Council ITTC, Melbourne, Australia
Pseudocode | Yes | Algorithm 1: PIBT. In each iteration, Plan Step computes a next move θ for each agent a ∈ A, currently at position ϕ, using a priority ordering p.
Open Source Code | Yes | Implementations are written in C++... https://github.com/nobodyczcz/Guided-PIBT
Open Datasets | No | Our maps are: Warehouse: a 500×140 synthetic fulfillment center map... Sortation: a 33×57 synthetic sortation centre map... Game: ost003d, a 194×194 map... Room: room-64-64-8, a 64×64 synthetic map... (No direct link, DOI, or explicit citation provided for these map sources as public datasets.)
Dataset Splits | No | The paper does not provide specific train/validation/test dataset splits for reproduction.
Hardware Specification | Yes | Implementations are written in C++ and evaluated on a Nectar Cloud VM instance with 32 AMD EPYC-Rome CPUs and 64 GB RAM.
Software Dependencies | No | Implementations are written in C++ and evaluated on a Nectar Cloud VM instance with 32 AMD EPYC-Rome CPUs and 64 GB RAM. (Only the programming language is mentioned, not specific libraries with version numbers.)
Experiment Setup | Yes | For lifelong MAPF, the maximum simulation time is based on the size of the map: we compute a maximum number of timesteps as (width + height) * 5, with the intention that each agent has the opportunity to complete approximately 5 tasks. In lifelong MAPF experiments, planners have 10 seconds to return actions for all agents at every timestep. In one-shot MAPF experiments, planners have a 60-second time limit.
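The simulation-horizon rule quoted in the Experiment Setup row is simple enough to check directly. A minimal sketch, applying (width + height) * 5 to the map sizes quoted in the Open Datasets row (the dictionary and variable names here are our own):

```python
# Maximum simulation horizon per map: (width + height) * 5 timesteps,
# intended to give each agent time to complete roughly 5 tasks.
# Map sizes are the ones quoted in the "Open Datasets" row.
maps = {
    "Warehouse": (500, 140),
    "Sortation": (33, 57),
    "Game (ost003d)": (194, 194),
    "Room (room-64-64-8)": (64, 64),
}

horizon = {name: (w + h) * 5 for name, (w, h) in maps.items()}
print(horizon)  # e.g. Warehouse runs for (500 + 140) * 5 = 3200 timesteps
```

So the largest map (Warehouse) is simulated for 3200 timesteps and the smallest (Sortation) for 450.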
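For readers unfamiliar with the PIBT plan step referenced in the Pseudocode row, the sketch below shows the core mechanism on a 4-connected grid: agents are planned in priority order, each preferring the neighbor closest to its goal, and an agent occupying a desired cell inherits the claimant's priority and must move aside (with backtracking if it cannot). This is a simplified rendering, not the paper's Algorithm 1: BFS distances stand in for the heuristic, and the function names (`pibt_step`, `plan`) and tie-breaking are our own.

```python
from collections import deque

def bfs_dist(grid, goal):
    """Shortest-path distance from every free cell to `goal` (4-connected)."""
    H, W = len(grid), len(grid[0])
    dist = {goal: 0}
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W and grid[nr][nc] == 0 and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    return dist

def pibt_step(grid, pos, goals, order):
    """One PIBT planning step: commit a next cell for every agent."""
    H, W = len(grid), len(grid[0])
    dists = {a: bfs_dist(grid, goals[a]) for a in pos}
    occupied = {p: a for a, p in pos.items()}  # who is where right now
    nxt = {}          # committed next positions
    reserved = set()  # cells already claimed this timestep

    def plan(a):
        r, c = pos[a]
        cands = [(r, c)] + [(r + dr, c + dc)
                            for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0))]
        cands = [(vr, vc) for vr, vc in cands
                 if 0 <= vr < H and 0 <= vc < W and grid[vr][vc] == 0]
        cands.sort(key=lambda v: dists[a].get(v, float("inf")))  # toward goal first
        for v in cands:
            if v in reserved:
                continue  # vertex conflict with an already-committed agent
            b = occupied.get(v)
            if b is not None and nxt.get(b) == (r, c):
                continue  # edge (swap) conflict
            reserved.add(v)
            nxt[a] = v
            if b is not None and b != a and b not in nxt:
                # Priority inheritance: the agent sitting on v must move aside.
                if not plan(b):
                    reserved.discard(v)  # backtrack and try the next candidate
                    del nxt[a]
                    continue
            return True
        return False  # the inheriting caller will try another cell

    for a in sorted(pos, key=lambda a: order[a]):  # lower value = higher priority
        if a not in nxt:
            plan(a)
    return nxt

# Demo: two agents in a 1x4 corridor; higher-priority "a" pushes "b" aside,
# even though "b" would rather move left toward its own goal.
grid = [[0, 0, 0, 0]]
step = pibt_step(grid, {"a": (0, 0), "b": (0, 1)},
                 {"a": (0, 3), "b": (0, 0)}, {"a": 0, "b": 1})
print(step)  # {'a': (0, 1), 'b': (0, 2)}
```

The demo shows the characteristic PIBT behavior: "b" is forced one cell away from its goal for a step because "a" inherited the right of way, which is what lets PIBT keep all agents moving without a joint search.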