Efficient Graph Generation with Graph Recurrent Attention Networks
Authors: Renjie Liao, Yujia Li, Yang Song, Shenlong Wang, Will Hamilton, David K. Duvenaud, Raquel Urtasun, Richard Zemel
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section we empirically verify the effectiveness of our model on both synthetic and real graph datasets with drastically varying sizes and characteristics. |
| Researcher Affiliation | Collaboration | Renjie Liao1,2,3, Yujia Li4, Yang Song5, Shenlong Wang1,2,3, William L. Hamilton6,7, David Duvenaud1,3, Raquel Urtasun1,2,3, Richard Zemel1,3,8; University of Toronto1, Uber ATG Toronto2, Vector Institute3, DeepMind4, Stanford University5, McGill University6, Mila Quebec Artificial Intelligence Institute7, Canadian Institute for Advanced Research8 |
| Pseudocode | No | The paper describes the model generation process and architecture using textual descriptions and mathematical equations, but does not include a structured pseudocode or algorithm block. |
| Open Source Code | Yes | Our code is released at: https://github.com/lrjconan/GRAN. |
| Open Datasets | Yes | (1) Grid: We generate 100 standard 2D grid graphs with 100 ≤ \|V\| ≤ 400. (2) Protein: This dataset contains 918 protein graphs [7] with 100 ≤ \|V\| ≤ 500. (3) Point Cloud: FirstMM-DB is a dataset of 41 simulated 3D point clouds of household objects [26] with an average graph size of over 1k nodes, and maximum graph size over 5k nodes. (A grid-generation sketch follows the table.) |
| Dataset Splits | Yes | We use the same protocol as [37] and create random 80% and 20% splits of the graphs in each dataset for training and testing. 20% of the training data in each split is used as the validation set. (See the split sketch below.) |
| Hardware Specification | Yes | To measure the run time for each setting we used a single GTX 1080Ti GPU. |
| Software Dependencies | No | The paper mentions using the 'Adam optimizer' but does not name any software dependencies with version numbers (e.g., a specific PyTorch, TensorFlow, or library version). |
| Experiment Setup | Yes | For our GRAN, hidden dimensions are set to 128, 512 and 256 on three datasets respectively. Block size and stride are both set to 1. The number of Bernoulli mixtures is 20 for all experiments. We stack 7 layers of GNNs and unroll each layer for 1 step. All of our models are trained with Adam optimizer [15] and constant learning rate 1e-4. (A training-configuration sketch follows the table.) |
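
The Grid row above only states the node-count range. As a concrete illustration, here is a minimal sketch (not the authors' generation script) that builds 100 random 2D grid graphs with 100 ≤ |V| ≤ 400 using networkx; the side-length sampling range is an assumption, since the paper only constrains the total node count.

```python
# Hypothetical sketch of the Grid dataset: 100 random 2D grid graphs
# with 100 <= |V| <= 400. The side-length range [10, 20] is an assumption;
# the paper only fixes the node-count bounds.
import random
import networkx as nx

def make_grid_dataset(num_graphs=100, min_nodes=100, max_nodes=400, seed=0):
    rng = random.Random(seed)
    graphs = []
    while len(graphs) < num_graphs:
        rows, cols = rng.randint(10, 20), rng.randint(10, 20)
        if min_nodes <= rows * cols <= max_nodes:
            graphs.append(nx.grid_2d_graph(rows, cols))
    return graphs

grids = make_grid_dataset()
assert all(100 <= g.number_of_nodes() <= 400 for g in grids)
```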
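
The split protocol row can likewise be made concrete. Below is a minimal sketch of the quoted procedure: a random 80%/20% train/test split, then 20% of the training graphs held out for validation. Seeding and shuffling details are assumptions; the paper only fixes the ratios.

```python
# Sketch of the quoted protocol: 80/20 train/test split, then 20% of
# the training portion reserved as a validation set.
import random

def split_dataset(graphs, seed=0):
    idx = list(range(len(graphs)))
    random.Random(seed).shuffle(idx)
    n_train = int(0.8 * len(idx))
    train_idx, test_idx = idx[:n_train], idx[n_train:]
    n_val = int(0.2 * len(train_idx))
    val_idx, train_idx = train_idx[:n_val], train_idx[n_val:]
    subset = lambda ids: [graphs[i] for i in ids]
    return subset(train_idx), subset(val_idx), subset(test_idx)

# With 100 graphs: 64 train, 16 validation, 20 test.
train, val, test = split_dataset(list(range(100)))
assert (len(train), len(val), len(test)) == (64, 16, 20)
```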
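
Finally, the optimizer settings in the experiment-setup row translate directly into a PyTorch sketch (the released GRAN repository is PyTorch-based). `GRANStub` below is a hypothetical placeholder, not the authors' model; only the Adam choice, the constant 1e-4 learning rate, and the per-dataset hidden dimensions (128, 512, 256) come from the paper.

```python
# Sketch of the reported training configuration: Adam [15] with a
# constant learning rate of 1e-4. GRANStub is a placeholder; the real
# model (7 GNN layers, 20 Bernoulli mixtures, block size and stride 1)
# is released at https://github.com/lrjconan/GRAN.
import torch
import torch.nn as nn

class GRANStub(nn.Module):
    def __init__(self, hidden_dim=128):  # 128 / 512 / 256 depending on dataset
        super().__init__()
        self.proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, x):
        return self.proj(x)

model = GRANStub(hidden_dim=128)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # constant LR, no schedule
```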