Discrete-state Continuous-time Diffusion for Graph Generation

Authors: Zhe Xu, Ruizhong Qiu, Yuzhong Chen, Huiyuan Chen, Xiran Fan, Menghai Pan, Zhichen Zeng, Mahashweta Das, Hanghang Tong

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experiments on plain and molecule graphs show that DISCO achieves competitive or superior performance against state-of-the-art graph generative models and provides additional sampling flexibility.
Researcher Affiliation | Collaboration | University of Illinois Urbana-Champaign ({zhexu3, rq5, zhichenz, htong}@illinois.edu) and Visa Research ({yuzchen, hchen, xirafan, menpan, mahdas}@visa.com).
Pseudocode | Yes | Algorithm 1 (Training of DISCO) and Algorithm 2 (τ-Leaping Graph Generation) are provided.
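The excerpt does not reproduce Algorithm 2 itself, so the following is only a generic sketch of one τ-leaping step for a single discrete variable governed by a continuous-time Markov chain with rate matrix Q, using a simplified at-most-one-jump approximation per leap. The function name and the rate-matrix convention are assumptions for illustration, not the DISCO implementation.

```python
import math
import random

def tau_leap_step(state, Q, tau, rng=random):
    """One tau-leaping update for a discrete-state CTMC (illustrative sketch).

    state: current discrete state index.
    Q: rate matrix; Q[i][j] is the jump rate from i to j for i != j.
    tau: leap size. Approximation: at most one jump per leap, taken with
    probability 1 - exp(-R * tau), where R is the total exit rate.
    """
    rates = [Q[state][j] if j != state else 0.0 for j in range(len(Q))]
    total = sum(rates)
    if total == 0.0:
        return state  # absorbing state: no transition possible
    if rng.random() < 1.0 - math.exp(-total * tau):  # a jump occurs in [t, t+tau)
        r = rng.random() * total  # pick the destination proportional to its rate
        acc = 0.0
        for j, q in enumerate(rates):
            acc += q
            if r < acc:
                return j
    return state
```

In graph diffusion samplers of this kind, one such update is typically applied to every node and edge variable in parallel per leap, which is what makes τ-leaping faster than exact event-by-event simulation.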
Open Source Code | Yes | The code is released at https://github.com/pricexu/Disco.
Open Datasets | Yes | Datasets SBM, Planar [51], and Community [82] are used... The datasets QM9 [62], MOSES [58], and GuacaMol [6] are chosen.
Dataset Splits | Yes | We follow the settings of SPECTRE [51] and DiGress [73] to split the SBM, Planar [51], and Community [82] datasets into 64/16/20% for training/validation/test sets.
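The 64/16/20% split above can be reproduced with a simple index partition; this is a minimal sketch (the function name and seeding scheme are illustrative assumptions, not the SPECTRE/DiGress code):

```python
import random

def split_indices(n, fracs=(0.64, 0.16, 0.20), seed=0):
    """Shuffle n graph indices and partition them into train/val/test splits.

    fracs: (train, val, test) fractions; test takes the remainder so the
    three splits always cover all n indices exactly once.
    """
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # fixed seed for a reproducible split
    n_train = int(fracs[0] * n)
    n_val = int(fracs[1] * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```

For example, a 200-graph dataset yields 128/32/40 graphs for train/val/test.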
Hardware Specification | Yes | All efficiency-study results are from one NVIDIA Tesla V100 SXM2-32GB GPU on a server with 96 Intel(R) Xeon(R) Gold 6240R CPUs @ 2.40GHz and 1.5 TB RAM.
Software Dependencies | No | The paper mentions implementing DISCO in PyTorch and PyTorch Geometric but does not provide specific version numbers for these dependencies.
Experiment Setup | Yes | For both variants, the dropout is set to 0.1, the learning rate to 2e-4, and the weight decay to 0.
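The reported hyperparameters can be collected in a small config object; this is only an illustrative sketch (the dataclass and field names are not from the DISCO codebase, and the optimizer class is not specified in the excerpt):

```python
from dataclasses import dataclass

@dataclass
class TrainConfig:
    """Hypothetical training config mirroring the reported hyperparameters."""
    dropout: float = 0.1        # reported dropout for both variants
    lr: float = 2e-4            # reported learning rate
    weight_decay: float = 0.0   # reported weight decay

cfg = TrainConfig()
# e.g. torch.optim.Adam(model.parameters(), lr=cfg.lr,
#                       weight_decay=cfg.weight_decay)  # optimizer choice assumed
```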