Causal Deciphering and Inpainting in Spatio-Temporal Dynamics via Diffusion Model

Authors: Yifan Duan, Jian Zhao, Pengcheng, Junyuan Mao, Hao Wu, Jingyu Xu, Shilong Wang, Caoyuan Ma, Kai Wang, Kun Wang, Xuelong Li

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments conducted on five real-world ST benchmarks demonstrate that integrating the CaPaint concept allows models to achieve improvements ranging from 3.7% to 77.3%.
Researcher Affiliation | Collaboration | Yifan Duan1, Jian Zhao2, Pengcheng5, Junyuan Mao1, Hao Wu1, Jingyu Xu3, Shilong Wang1, Caoyuan Ma3, Kai Wang4, Kun Wang6, Xuelong Li2. 1University of Science and Technology of China, 2TeleAI, China Telecom, 3Wuhan University, 4National University of Singapore, 5Beijing Forestry University, 6Nanyang Technological University
Pseudocode | Yes | Appendix A, CaPaint Inpainting Algorithm: "Algorithm 1 Causal Intervention with Diffusion Inpainting".
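For orientation, the sketch below shows what a causal-intervention-by-inpainting step of this kind could look like. It is a minimal sketch, assuming a diffusers-style scheduler API (`add_noise`, `step`) and RePaint-style known/unknown blending; the function name, mask convention, and API are illustrative assumptions, not the authors' released code.

```python
import torch

@torch.no_grad()
def causal_intervention_inpaint(x, causal_mask, eps_model, scheduler):
    """RePaint-style diffusion inpainting as a causal intervention (sketch).

    x           : (B, C, H, W) original spatio-temporal observation
    causal_mask : (B, 1, H, W) binary mask, 1 = causal patch to preserve
    eps_model   : noise-prediction network eps_theta(x_t, t) (assumed)
    scheduler   : diffusers-style scheduler; call set_timesteps() beforehand
    """
    x_t = torch.randn_like(x)  # start the reverse process from pure noise
    for t in scheduler.timesteps:
        # Causal (known) region: forward-diffuse the real data to noise level t.
        x_known = scheduler.add_noise(x, torch.randn_like(x), t)
        # Non-causal (unknown) region: one reverse-diffusion denoising step.
        eps = eps_model(x_t, t)
        x_unknown = scheduler.step(eps, t, x_t).prev_sample
        # Stitch the two: keep causal content, resample the environment.
        x_t = causal_mask * x_known + (1.0 - causal_mask) * x_unknown
    return x_t  # counterfactual sample with non-causal regions regenerated
```

Blending at every denoising step (rather than once at the end) is the standard RePaint trick: it keeps the preserved causal region statistically consistent with the noise level of the step being denoised.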
Open Source Code | Yes | Our project is available at CaPaint.
Open Datasets | Yes | Datasets. We extensively evaluate our proposal using a diverse range of benchmark datasets spanning multiple fields, including FireSys [7], SEVIR [79], the diffusion reaction system (DRS) [6], KTH [58], and TaxiBJ+ [37].
Dataset Splits | No | The paper discusses using "varying proportions of training data" but does not explicitly state train/validation/test splits as percentages or absolute counts.
Hardware Specification | Yes | All experiments are conducted on hardware equipped with 24 NVIDIA GeForce RTX 4090 GPUs.
Software Dependencies | No | The paper mentions "The optimizer used is Adam" and provides learning rates and batch sizes for models, but it does not list specific version numbers for software libraries or environments (e.g., Python, PyTorch, TensorFlow, CUDA).
Experiment Setup | Yes | The optimizer used is Adam, and different learning rates (LR) and batch sizes are set for each model. The specific parameter settings are shown in the table below:

Model | Learning Rate (LR) | Batch Size
CLSTM | 0.001 | 8
MAU | 0.001 | 8
MMVP | 0.004 | 4
PredRNNv2 | 0.001 | 8
SimVP | 0.004 | 4
ViT | 0.004 | 4
Earthfarseer | 0.001 | 8
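Those settings translate directly into a small per-model training configuration. The sketch below shows one way to wire them into PyTorch; the model and dataset objects are placeholders, and only the optimizer choice (Adam), learning rates, and batch sizes come from the table above.

```python
import torch

# Per-model hyperparameters as reported in the paper's settings table.
TRAIN_CONFIG = {
    "CLSTM":        {"lr": 1e-3, "batch_size": 8},
    "MAU":          {"lr": 1e-3, "batch_size": 8},
    "MMVP":         {"lr": 4e-3, "batch_size": 4},
    "PredRNNv2":    {"lr": 1e-3, "batch_size": 8},
    "SimVP":        {"lr": 4e-3, "batch_size": 4},
    "ViT":          {"lr": 4e-3, "batch_size": 4},
    "Earthfarseer": {"lr": 1e-3, "batch_size": 8},
}

def make_optimizer_and_loader(name, model, dataset):
    """Build the Adam optimizer and data loader for a named backbone.

    `model` and `dataset` are caller-supplied placeholders; the function
    only encodes the reported learning rate and batch size per model.
    """
    cfg = TRAIN_CONFIG[name]
    optimizer = torch.optim.Adam(model.parameters(), lr=cfg["lr"])
    loader = torch.utils.data.DataLoader(
        dataset, batch_size=cfg["batch_size"], shuffle=True
    )
    return optimizer, loader
```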