Learning to Resolve Conflicts for Multi-Agent Path Finding with Conflict-Based Search

Authors: Taoan Huang, Sven Koenig, Bistra Dilkina (pp. 11246-11253)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on benchmark maps indicate that our approach, ML-guided CBS, significantly improves the success rates, search tree sizes and runtimes of the current state-of-the-art CBS solver.
Researcher Affiliation | Academia | Taoan Huang, Sven Koenig, Bistra Dilkina, University of Southern California, {taoanhua, skoenig, dilkina}@usc.edu
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. (A simplified sketch of the conflict-selection loop is given after this table.)
Open Source Code | No | The paper does not provide a link to, or an explicit statement about, an open-source release of the code for the methodology it describes.
Open Datasets | Yes | We use a set of six four-neighbor grid maps M of different sizes and structures as the graphs underlying the instances and evaluate our algorithms on them. M includes (1) a warehouse map (Li et al. 2020); (2) the room map room-32-32-4 (Stern et al. 2019); (3) the maze map maze-128-128-2 (Stern et al. 2019); (4) the random map; (5) the city map Paris 1 256 (Stern et al. 2019); (6) the game map. (A sketch of reading maps in this benchmark format follows the table.)
Dataset Splits | No | The paper states 'We obtain two sets of instances, a training dataset I_Train and a test dataset I_Test' for data collection and model learning, and describes similar splits for the experimental evaluation; however, it does not explicitly define a separate validation dataset or split.
Hardware Specification | Yes | The experiments are conducted on 2.4 GHz Intel Core i7 CPUs with 16 GB RAM.
Software Dependencies | No | The paper mentions using an 'open-source software package (Joachims 2006) that implements a Support Vector Machine (SVM) approach (Joachims 2002)' and 'C++ code for CBSH2 with the WDG heuristic made available by Li et al. (2019a)', but it does not provide version numbers for these software components.
Experiment Setup | Yes | We set the regularization parameter C = 1/100 to train an SVMrank (Joachims 2002) with a linear kernel to obtain each of the ranking functions. We varied C ∈ {1/10, 1/100, 1/1000} and achieved similar results. (A sketch of this training step is given after the table.)
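The Pseudocode row above notes that the paper contains no algorithm blocks. As a purely illustrative aid (not the authors' implementation), the following Python sketch shows where a learned conflict-ranking function would plug into the high-level CBS loop: the only change relative to standard CBS is that the conflict chosen for branching is the one the model scores highest. The helpers `low_level_search`, `find_conflicts`, `conflict_features` and `score` are placeholders for components the paper describes only in prose.

```python
# Heavily simplified sketch of the high-level CBS loop, showing where a
# learned conflict-scoring function replaces the default conflict choice.
# All helper callables are placeholders, not the authors' code.
import heapq
import itertools

def ml_guided_cbs(starts, goals, low_level_search, find_conflicts,
                  conflict_features, score):
    counter = itertools.count()              # tie-breaker for the heap
    root_paths = [low_level_search(a, starts[a], goals[a], constraints=[])
                  for a in range(len(starts))]
    root = {"constraints": [], "paths": root_paths,
            "cost": sum(len(p) for p in root_paths)}   # sum-of-costs proxy
    open_list = [(root["cost"], next(counter), root)]

    while open_list:
        _, _, node = heapq.heappop(open_list)
        conflicts = find_conflicts(node["paths"])
        if not conflicts:
            return node["paths"]             # conflict-free solution

        # ML guidance: branch on the conflict the learned model ranks highest,
        # instead of e.g. the first conflict found.
        chosen = max(conflicts, key=lambda c: score(conflict_features(node, c)))

        # Each conflict is assumed to carry its two branch constraints,
        # one per involved agent (a placeholder representation).
        for agent, constraint in chosen["branches"]:
            child_constraints = node["constraints"] + [constraint]
            new_path = low_level_search(agent, starts[agent], goals[agent],
                                        child_constraints)
            if new_path is None:
                continue                     # infeasible branch
            child_paths = list(node["paths"])
            child_paths[agent] = new_path
            child = {"constraints": child_constraints, "paths": child_paths,
                     "cost": sum(len(p) for p in child_paths)}
            heapq.heappush(open_list, (child["cost"], next(counter), child))
    return None                              # no solution exists
```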
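Most of the maps listed in the Open Datasets row (room-32-32-4, maze-128-128-2, Paris 1 256, and the random map) come from the MAPF benchmark of Stern et al. (2019), which distributes four-neighbor grids in the MovingAI .map text format. Below is a minimal, self-contained parser sketch for that format; the file path is hypothetical, and treating only '.', 'G' and 'S' as traversable is a simplification of the full terrain semantics.

```python
# Minimal sketch: read a grid map in the MovingAI / Stern et al. (2019)
# benchmark format (a small header followed by the character grid).
from pathlib import Path

PASSABLE = {".", "G", "S"}   # simplified: everything else is treated as blocked

def load_map(path):
    """Return a 2D list of booleans, True where a cell is traversable."""
    lines = Path(path).read_text().splitlines()
    header, grid_start = {}, 0
    for i, line in enumerate(lines):
        if line.strip() == "map":
            grid_start = i + 1
            break
        key, _, value = line.partition(" ")   # e.g. "height 32"
        header[key] = value
    height, width = int(header["height"]), int(header["width"])
    return [[c in PASSABLE for c in lines[grid_start + r][:width]]
            for r in range(height)]

def neighbors(grid, r, c):
    """Four-neighbor moves onto traversable cells."""
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc]:
            yield nr, nc

# Example usage (the path is hypothetical):
# grid = load_map("maps/room-32-32-4.map")
# print(sum(cell for row in grid for cell in row), "free cells")
```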
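Finally, the Experiment Setup row reports training SVMrank with a linear kernel and C = 1/100. The authors used the SVMrank package (Joachims 2006); the sketch below instead approximates the pairwise ranking objective with scikit-learn's LinearSVC trained on pairwise feature differences, purely to illustrate the setup. Apart from the C = 1/100 value, the feature layout, group construction and toy data are assumptions, not the paper's pipeline.

```python
# Minimal sketch: learn a linear ranking function over conflict features by
# reducing ranking to binary classification on pairwise feature differences
# (a standard stand-in for SVM^rank, not the authors' training code).
import numpy as np
from sklearn.svm import LinearSVC

def pairwise_differences(groups):
    """Build (x_i - x_j, sign) pairs within each query group.

    `groups` is a list of (features, scores) tuples, one per CBS node, where
    `features` is an (n_conflicts, n_features) array and `scores` ranks the
    node's conflicts (higher = should be resolved first). Placeholder layout.
    """
    diffs, labels = [], []
    for X, y in groups:
        for i in range(len(y)):
            for j in range(len(y)):
                if y[i] > y[j]:          # conflict i should outrank conflict j
                    diffs.append(X[i] - X[j])
                    labels.append(1)
                    diffs.append(X[j] - X[i])
                    labels.append(-1)
    return np.asarray(diffs), np.asarray(labels)

# Toy data: 2 CBS nodes, each with 3 candidate conflicts and 5 features.
rng = np.random.default_rng(0)
groups = [(rng.normal(size=(3, 5)), rng.permutation(3)) for _ in range(2)]

X_pairs, y_pairs = pairwise_differences(groups)
# C = 1/100 mirrors the regularization value reported in the paper.
model = LinearSVC(C=1 / 100, fit_intercept=False, max_iter=10_000)
model.fit(X_pairs, y_pairs)

# At test time, the conflicts of a CBS node are scored with the learned
# linear function and the highest-scoring conflict is resolved first.
w = model.coef_.ravel()
node_features = rng.normal(size=(4, 5))
best_conflict = int(np.argmax(node_features @ w))
print("pick conflict", best_conflict)
```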