Autoregressive Diffusion Model for Graph Generation

Authors: Lingkai Kong, Jiaming Cui, Haotian Sun, Yuchen Zhuang, B. Aditya Prakash, Chao Zhang

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments on six diverse generic graph datasets and two molecule datasets show that our model achieves better or comparable generation performance with previous state-of-the-art, and meanwhile enjoys fast generation speed.
Researcher Affiliation | Academia | School of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, USA. Correspondence to: Lingkai Kong <lkkong@gatech.edu>.
Pseudocode | Yes | The detailed training procedure is summarized in Algorithm 1 in Appendix A.6.
Open Source Code | No | The paper does not provide an explicit statement or link for the open-sourcing of its own code.
Open Datasets | Yes | We evaluate the performance of GRAPHARM on six diverse graph generation benchmarks from different domains: (1) Community-small (You et al., 2018b), (2) Caveman (You, 2018), (3) Cora (Sen et al., 2008), (4) Breast (Gonzalez-Malerva et al., 2011), (5) Enzymes (Schomburg et al., 2004) and (6) Ego-small (Sen et al., 2008). For each dataset, we use 80% of the graphs as the training set and the remaining 20% as the test set. ... We use two molecular datasets, QM9 (Ramakrishnan et al., 2014) and ZINC250k (Irwin et al., 2012).
Dataset Splits | Yes | For each dataset, we use 80% of the graphs as the training set and the remaining 20% as the test set. Following (Liao et al., 2019), we randomly select 20% from the training data as the validation set. (A split sketch follows the table.)
Hardware Specification | No | The paper mentions 'funds/computing resources from Georgia Tech' but does not provide specific hardware details such as GPU/CPU models or memory.
Software Dependencies | No | The paper mentions specific optimizers (ADAM) and network architectures (GAT, GNN) but does not provide version numbers for software dependencies or libraries such as Python, PyTorch, or TensorFlow.
Experiment Setup | Yes | Model optimization: We use ADAM with β1 = 0.9 and β2 = 0.999 as the optimizer. The learning rate is set to 10^-4 and 5 × 10^-4 for the denoising network and the diffusion ordering network, respectively, on all the datasets. ... We set the number of trajectories M as 4 for all the datasets. (A configuration sketch follows the table.)
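
The split protocol reported above (80% of the graphs for training, 20% for testing, with a further 20% of the training graphs held out for validation following Liao et al., 2019) can be expressed as a short sketch. The function name, the pure-Python shuffling, and the seed handling are illustrative assumptions; the paper does not specify an implementation.

```python
import random

def split_graphs(graphs, seed=0):
    """80/20 train/test split, then 20% of the training graphs for validation.

    A minimal sketch of the reported protocol; not the authors' code.
    """
    rng = random.Random(seed)
    shuffled = list(graphs)
    rng.shuffle(shuffled)

    # 20% of all graphs become the test set.
    n_test = int(0.2 * len(shuffled))
    test_graphs, train_graphs = shuffled[:n_test], shuffled[n_test:]

    # 20% of the remaining training graphs become the validation set.
    n_val = int(0.2 * len(train_graphs))
    val_graphs, train_graphs = train_graphs[:n_val], train_graphs[n_val:]

    return train_graphs, val_graphs, test_graphs
```

The reported optimizer settings can likewise be sketched. Adam with β1 = 0.9, β2 = 0.999, a learning rate of 10^-4 for the denoising network, 5 × 10^-4 for the diffusion ordering network, and M = 4 trajectories are taken from the experiment setup above; the placeholder modules, variable names, and the use of PyTorch are assumptions.

```python
import torch

# Placeholders standing in for the GNN-based denoising network and the
# diffusion ordering network described in the paper (architectures assumed).
denoising_net = torch.nn.Linear(16, 16)
ordering_net = torch.nn.Linear(16, 16)

# Two separate Adam optimizers with the learning rates reported in the paper.
opt_denoise = torch.optim.Adam(denoising_net.parameters(), lr=1e-4, betas=(0.9, 0.999))
opt_order = torch.optim.Adam(ordering_net.parameters(), lr=5e-4, betas=(0.9, 0.999))

# Number of sampled diffusion trajectories (node orderings) per graph, as reported.
M = 4
```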
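Using two independent optimizers, one per network, is one straightforward way to realize the two different learning rates the paper reports; whether the authors used separate optimizers or parameter groups within a single optimizer is not stated.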