Graph-Constrained Diffusion for End-to-End Path Planning

Authors: Dingyuan Shi, Yongxin Tong, Zimu Zhou, Ke Xu, Zheng Wang, Jieping Ye

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We validate the efficacy of GDP on two real-world datasets. Our GDP beats strong baselines by 14.2%-43.5% and achieves state-of-the-art performance."
Researcher Affiliation | Collaboration | Dingyuan Shi (Beihang University, chnsdy@buaa.edu.cn); Yongxin Tong (Beihang University, yxtong@buaa.edu.cn); Zimu Zhou (City University of Hong Kong, zimuzhou@cityu.edu.hk); Ke Xu (Beihang University, kexu@buaa.edu.cn); Zheng Wang (Independent Researcher, wangzheng04@gmail.com); Jieping Ye (University of Michigan, jieping@gmail.com, jpye@umich.edu)
Pseudocode | Yes | "Algorithm 1: Training. Input: dataset P, model nnθ. Output: model nnθ." (a hypothetical training-loop sketch follows the table)
Open Source Code | Yes | "Our source code can be found at https://github.com/sdycodes/Graph-Diffusion-Planning"
Open Datasets | Yes | "We use two real datasets, city A and city B, from Didi GAIA (https://gaia.didichuxing.com). Please refer to App. F.1 for preprocessing, parameter settings, and other implementation details."
Dataset Splits | No | The paper mentions 'test datasets' and 'test OD pairs for validation' but does not specify explicit train/validation/test splits with percentages or sample counts needed for reproducibility.
Hardware Specification | Yes | "We train our model for 20 epochs on an NVIDIA GeForce RTX 3090 with 24 GiB memory."
Software Dependencies | No | The paper describes model training and links to the source code, but it does not list specific software dependencies with version numbers (e.g., Python or PyTorch versions).
Experiment Setup | Yes | "For the diffusion process, we set T to 100 and β increases linearly from 0.0001 to 10. For training, we choose a batch size of 16 and set the learning rate to 0.005. For the Gaussian mixture model, we use 5 components."
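
For concreteness, the hyperparameters quoted in the Experiment Setup row translate into a configuration like the one below. This is a sketch under stated assumptions: the variable names are ours, not the authors', and the β endpoint of 10 is copied verbatim from the paper even though a vanilla DDPM schedule requires β < 1, so the authors presumably use a different parameterization.

```python
import torch

# Hyperparameters as reported in the paper; variable names are illustrative,
# not taken from the authors' released code.
T = 100                                # number of diffusion steps
betas = torch.linspace(1e-4, 10.0, T)  # "beta linearly increases from 0.0001 to 10"
# NOTE: a vanilla DDPM needs beta < 1 (otherwise 1 - beta goes negative), so the
# paper's range likely refers to a different parameterization. A conventional
# schedule would be: betas = torch.linspace(1e-4, 0.02, T)
batch_size = 16
learning_rate = 0.005
gmm_components = 5                     # Gaussian mixture model components
epochs = 20                            # reported under Hardware Specification
```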
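
Algorithm 1 itself is only outlined in the paper (inputs: dataset P and model nnθ; output: trained nnθ). A minimal sketch of what such a loop could look like, assuming a standard DDPM epsilon-prediction objective and omitting the paper's graph-constrained machinery, is given below; `nn_theta`, its call signature, and the data loader are hypothetical placeholders, not the authors' implementation.

```python
import torch

def train(nn_theta, dataloader, betas, epochs=20, lr=0.005):
    """Hypothetical DDPM-style training: sample a timestep, diffuse the clean
    paths, and regress the model's output against the injected noise."""
    opt = torch.optim.Adam(nn_theta.parameters(), lr=lr)
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative signal level
    for _ in range(epochs):
        for x0 in dataloader:                        # x0: batch of paths, shape (B, D)
            t = torch.randint(0, len(betas), (x0.shape[0],))
            noise = torch.randn_like(x0)
            a_bar = alphas_bar[t].unsqueeze(-1)
            # forward process q(x_t | x_0): interpolate between data and noise
            xt = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
            loss = ((nn_theta(xt, t) - noise) ** 2).mean()  # epsilon-prediction MSE
            opt.zero_grad()
            loss.backward()
            opt.step()
    return nn_theta

# Usage with a conventional beta < 1 schedule for numerical validity:
# model = train(model, loader, betas=torch.linspace(1e-4, 0.02, 100))
```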