GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Models
Authors: Jiaxuan You, Rex Ying, Xiang Ren, William Hamilton, Jure Leskovec
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that GraphRNN significantly outperforms all baselines, learning to generate diverse graphs that match the structural characteristics of a target set, while also scaling to graphs 50× larger than previous deep models. |
| Researcher Affiliation | Academia | 1Department of Computer Science, Stanford University, Stanford, CA, 94305 2Department of Computer Science, University of Southern California, Los Angeles, CA, 90007. |
| Pseudocode | Yes | Algorithm 1: GraphRNN inference algorithm |
| Open Source Code | Yes | The code is available in https://github.com/snap-stanford/GraphRNN |
| Open Datasets | Yes | Community. 500 two-community graphs...generated by the Erdős–Rényi model (E-R) (Erdős & Rényi, 1959)...Protein. 918 protein graphs (Dobson & Doig, 2003)...Ego. 757 3-hop ego networks extracted from the Citeseer network (Sen et al., 2008) |
| Dataset Splits | No | We use 80% of the graphs in each dataset for training and test on the rest. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions implementing the model but does not specify any software dependencies with version numbers (e.g., specific deep learning frameworks like PyTorch or TensorFlow versions). |
| Experiment Setup | No | The hyperparameter settings for GraphRNN were fixed after development tests on data that was not used in follow-up evaluations (further details in the Appendix). |
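
For context on the "Pseudocode" row above: the sketch below illustrates the kind of auto-regressive inference loop that Algorithm 1 describes, with a graph-level RNN conditioning an edge-level model that emits each new node's adjacency vector to previously generated nodes. The class name, module sizes, SOS convention, and stopping rule here are illustrative assumptions for a minimal runnable example, not the authors' released implementation (see the repository linked in the table).

```python
# Minimal sketch of an auto-regressive GraphRNN-style inference loop.
# Untrained, randomly initialized modules; all sizes are illustrative assumptions.
import torch
import torch.nn as nn

class GraphRNNSketch(nn.Module):
    def __init__(self, max_prev_nodes=10, hidden_graph=64, hidden_edge=16):
        super().__init__()
        self.M = max_prev_nodes  # bound on how far back a new node can connect
        # Graph-level RNN: consumes the previous node's adjacency vector.
        self.graph_rnn = nn.GRU(input_size=max_prev_nodes,
                                hidden_size=hidden_graph, batch_first=True)
        # Edge-level RNN: emits edge probabilities to the M previous nodes, one at a time.
        self.edge_rnn = nn.GRU(input_size=1, hidden_size=hidden_edge, batch_first=True)
        self.edge_init = nn.Linear(hidden_graph, hidden_edge)
        self.edge_out = nn.Linear(hidden_edge, 1)

    @torch.no_grad()
    def sample(self, max_nodes=20):
        adj_rows = []                      # one adjacency vector per generated node
        h = None                           # graph-level hidden state
        x = torch.ones(1, 1, self.M)       # SOS token (all-ones input vector, an assumption)
        for _ in range(max_nodes):
            out, h = self.graph_rnn(x, h)                           # update graph state
            eh = self.edge_init(out).transpose(0, 1).contiguous()   # init edge-level state
            edge_in = torch.ones(1, 1, 1)                           # edge-level SOS
            row = []
            for _ in range(self.M):        # sample edges to the M previous nodes
                eo, eh = self.edge_rnn(edge_in, eh)
                p = torch.sigmoid(self.edge_out(eo))
                bit = torch.bernoulli(p)
                row.append(bit.item())
                edge_in = bit
            if sum(row) == 0 and adj_rows:  # stop: new node connects to no earlier node
                break
            adj_rows.append(row)
            x = torch.tensor(row).view(1, 1, self.M)
        return adj_rows

adj = GraphRNNSketch().sample()
print(len(adj), "nodes sampled")
```

Because the modules are untrained, the sketch only produces random adjacency rows; its purpose is to make the node-by-node, edge-by-edge generation order concrete.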
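The "Open Datasets" row quotes the Community benchmark, whose graphs are assembled from Erdős–Rényi blocks. The snippet below shows one hedged way to construct such a two-community graph with networkx; the block size, edge probability, and number of inter-community edges are illustrative choices, not the paper's exact settings.

```python
# Hedged sketch: build a two-community graph from two Erdős–Rényi blocks,
# then connect them with a few random inter-community edges.
# All parameters below are illustrative, not the paper's exact settings.
import random
import networkx as nx

def two_community_graph(n_per_block=40, p=0.3, n_cross_edges=4, seed=0):
    rng = random.Random(seed)
    # Two independent G(n, p) blocks.
    g1 = nx.gnp_random_graph(n_per_block, p, seed=seed)
    g2 = nx.gnp_random_graph(n_per_block, p, seed=seed + 1)
    # Relabel the second block so node ids do not collide, then merge.
    g2 = nx.relabel_nodes(g2, {v: v + n_per_block for v in g2.nodes()})
    g = nx.union(g1, g2)
    # Sparse random edges between the two communities.
    for _ in range(n_cross_edges):
        u = rng.randrange(n_per_block)
        v = rng.randrange(n_per_block, 2 * n_per_block)
        g.add_edge(u, v)
    return g

g = two_community_graph()
print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
```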