Variational Graph Recurrent Neural Networks
Authors: Ehsan Hajiramezanali, Arman Hasanzadeh, Krishna Narayanan, Nick Duffield, Mingyuan Zhou, Xiaoning Qian
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments with multiple real-world dynamic graph datasets demonstrate that SI-VGRNN and VGRNN consistently outperform the existing baseline and state-of-the-art methods by a significant margin in dynamic link prediction. |
| Researcher Affiliation | Academia | Department of Electrical and Computer Engineering, Texas A&M University {ehsanr, armanihm, duffieldng, krn, xqian}@tamu.edu McCombs School of Business, The University of Texas at Austin mingyuan.zhou@mccombs.utexas.edu |
| Pseudocode | No | The paper does not include any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | We implemented (SI-)VGRNN in PyTorch [18] and the implementation of our proposed models is accessible at https://github.com/VGraphRNN/VGRNN. |
| Open Datasets | No | The paper mentions using 'six real-world dynamic graphs as described in Table 1' and 'More detailed descriptions of the datasets can be found in the supplement.', but it does not provide concrete access information such as a direct URL, DOI, repository, or a formal citation with author(s) and year for the public datasets used. |
| Dataset Splits | Yes | For dynamic link detection problem, we randomly remove 5% and 10% of all edges at each time for validation and test sets, respectively. |
| Hardware Specification | No | The paper states 'We also thank Texas A&M High Performance Research Computing and Texas Advanced Computing Center for providing computational resources to perform experiments in this work.' This acknowledges computational resources but does not provide specific hardware details such as GPU models, CPU types, or memory. |
| Software Dependencies | No | The paper states 'We implemented (SI-)VGRNN in PyTorch [18]'. While PyTorch is mentioned, a specific version number for PyTorch or any other software library is not provided. |
| Experiment Setup | Yes | For all datasets, we set up our VGRNN model to have a single recurrent hidden layer with 32 GRU units. All ϕ s in equations (3), (4), and (6) are modeled by a 32-dimensional fully-connected layer... In all experiments, we train the models for 1500 epochs with the learning rate 0.01. We use the validation set for the early stopping. |
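The dataset-split row above describes randomly removing 5% and 10% of edges at each time step for validation and test sets. A minimal sketch of that per-snapshot split, assuming edges are stored as arrays of node-index pairs; the function name and data layout are hypothetical, not taken from the authors' code:

```python
import numpy as np

def split_edges(edges, val_frac=0.05, test_frac=0.10, seed=0):
    """Randomly hold out edges of one graph snapshot for validation/test.

    `edges` is an (E, 2) array of node-index pairs; the fractions follow
    the paper's 5% validation / 10% test setting. Held-out edges are
    removed from the training snapshot.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(edges))
    n_val = int(len(edges) * val_frac)
    n_test = int(len(edges) * test_frac)
    val = edges[idx[:n_val]]
    test = edges[idx[n_val:n_val + n_test]]
    train = edges[idx[n_val + n_test:]]
    return train, val, test

# For a dynamic graph, apply the same split independently to every snapshot.
snapshots = [np.arange(200).reshape(100, 2)]  # toy snapshot with 100 edges
splits = [split_edges(s) for s in snapshots]
```

Applying this to each time step yields the per-snapshot training graphs plus held-out edge sets used for dynamic link detection.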
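The experiment-setup row reports training for 1500 epochs with early stopping on the validation set. A generic early-stopping loop consistent with that budget is sketched below; the patience value and callback names are assumptions, since the paper does not report them:

```python
def train(model_step, val_score, max_epochs=1500, patience=50):
    """Early-stopping loop matching the reported 1500-epoch budget.

    `model_step` runs one training epoch; `val_score` returns validation
    performance (higher is better). The patience of 50 is an assumed
    value; the paper only states that the validation set is used for
    early stopping.
    """
    best, best_epoch = float("-inf"), 0
    for epoch in range(max_epochs):
        model_step()
        score = val_score()
        if score > best:
            best, best_epoch = score, epoch
        elif epoch - best_epoch >= patience:
            break  # validation score has not improved for `patience` epochs
    return best
```

In the paper's setting, `model_step` would perform one optimizer step over all snapshots at learning rate 0.01, with the model's single recurrent layer of 32 GRU units.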