Geometric Trajectory Diffusion Models

Authors: Jiaqi Han, Minkai Xu, Aaron Lou, Haotian Ye, Stefano Ermon

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on both unconditional and conditional generation in various scenarios, including physical simulation, molecular dynamics, and pedestrian motion. Empirical results on a wide suite of metrics demonstrate that GeoTDM can generate realistic geometric trajectories with significantly higher quality.
Researcher Affiliation | Academia | Jiaqi Han, Minkai Xu, Aaron Lou, Haotian Ye, Stefano Ermon (Stanford University)
Pseudocode | Yes | Algorithm 1: Training Procedure of GeoTDM-uncond; Algorithm 2: Sampling Procedure of GeoTDM-uncond; Algorithm 3: Training Procedure of GeoTDM-cond; Algorithm 4: Sampling Procedure of GeoTDM-cond
Open Source Code | Yes | Code is available at https://github.com/hanjq17/GeoTDM.
Open Datasets | Yes | We employ the MD17 [5] dataset, which contains the DFT-simulated molecular dynamics (MD) trajectories of 8 small molecules... We apply our model to the ETH-UCY [35, 28] dataset, a challenging and large-scale benchmark for pedestrian trajectory forecasting.
Dataset Splits | Yes | For all three datasets, we use 3000 trajectories for training, 2000 for validation, and 2000 for testing. For each molecule, we construct a training set of 5000 trajectories, and 1000/1000 for validation and testing, uniformly sampled along the time dimension.
Hardware Specification | Yes | We use Distributed Data Parallel on 4 Nvidia A6000 GPUs to train all the models. Our CPUs were standard Intel CPUs.
Software Dependencies | No | The paper mentions using an Adam optimizer and refers to implementations from other papers [43], [11], but does not list the specific software libraries (such as PyTorch, TensorFlow, or particular GNN frameworks) with the version numbers required for reproducibility.
Experiment Setup | Yes | We provide the detailed hyper-parameters of GeoTDM in Table 7. We adopt the Adam optimizer with betas (0.9, 0.999) and ϵ = 10^-8. For all experiments, we use the linear noise schedule [18] with β_start = 0.02 and β_end = 0.0001.
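The reported experiment setup (a linear noise schedule with the quoted β endpoints, plus standard DDPM quantities derived from it) can be sketched as follows. This is a minimal illustration, not the authors' code: the step count `T = 1000` is an assumption (the paper's actual value is in its Table 7), and the β endpoints are taken verbatim from the quote above.

```python
import numpy as np

# Hedged sketch of the paper's reported linear noise schedule.
# beta_start / beta_end are the values quoted in the experiment setup;
# T is an assumed number of diffusion steps (see the paper's Table 7).
def linear_beta_schedule(beta_start=0.02, beta_end=0.0001, T=1000):
    """Linearly interpolate noise variances beta_t between the endpoints."""
    return np.linspace(beta_start, beta_end, T)

betas = linear_beta_schedule()
alphas = 1.0 - betas
# Cumulative products \bar{alpha}_t, used in the standard DDPM
# forward process q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I).
alpha_bars = np.cumprod(alphas)

print(betas[0], betas[-1], alpha_bars.shape)
```

With these quantities in hand, a DDPM-style training step samples a timestep t, corrupts a clean trajectory with noise scaled by `alpha_bars[t]`, and regresses the noise; the Adam settings quoted above would correspond to `torch.optim.Adam(params, betas=(0.9, 0.999), eps=1e-8)` in a PyTorch implementation.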