Graph Representation Learning via Ladder Gamma Variational Autoencoders
Authors: Arindam Sarkar, Nikhil Mehta, Piyush Rai
AAAI 2020, pp. 5604-5611 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We report both quantitative and qualitative results on several benchmark datasets and compare our model with several state-of-the-art methods. We evaluate our model on several synthetic and real datasets and compare with various state-of-the-art baselines. |
| Researcher Affiliation | Collaboration | Arindam Sarkar (Amazon India), Nikhil Mehta (Duke University), Piyush Rai (IIT Kanpur) |
| Pseudocode | No | The paper describes the generative process and inference network using equations and textual descriptions, but it does not include a formally labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | No | Additional results are included in a longer version of the paper available on arXiv. This is a link to arXiv, not a code repository, and there is no explicit statement about code availability. |
| Open Datasets | Yes | We next evaluate our model and the baselines for link prediction on 4 real-world benchmark graph datasets: NIPS12 (2037 nodes), Cora (2361 nodes), Citeseer (3312 nodes), and Pubmed (19717 nodes) (Kipf and Welling 2016a). For NIPS12, a footnote points to http://www.cs.nyu.edu/roweis/data.html. |
| Dataset Splits | Yes | Unless explicitly stated, for the link prediction and community discovery tasks we use 85% of the adjacency matrix for training, 5% for validation, and 10% for testing (see the edge-split sketch below the table). |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper mentions using components like GCN but does not provide specific version numbers for any software, libraries, or frameworks used in the implementation or experimentation. |
| Experiment Setup | Yes | For our model, we set the gamma shape hyperparameter of the top layer to 10^-5; for subsequent layers, the shape parameter is drawn as per Eq. (1). The gamma rate parameter was set to 10^-3 for the top layer and 10^-2 for subsequent layers... We used two layers in both the encoder and decoder networks, with layer sizes (bottom and top) of 128 and 64 for Cora, Citeseer, and Pubmed, and 64 and 32 for NIPS12. All datasets were trained for 500 epochs (see the configuration sketch below the table). |
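
The reported 85/5/10 split of the adjacency matrix is the standard edge-holdout protocol for link prediction. The sketch below is a minimal illustration of that protocol, not the authors' code; the function name `split_edges` and the use of SciPy sparse matrices are assumptions.

```python
# Minimal sketch (assumed, not the paper's implementation) of an 85/5/10 edge split
# for link prediction on an undirected graph stored as a scipy sparse adjacency matrix.
import numpy as np
import scipy.sparse as sp

def split_edges(adj, val_frac=0.05, test_frac=0.10, seed=0):
    """Split the upper-triangular edges of `adj` into train/val/test edge sets."""
    rng = np.random.default_rng(seed)
    upper = sp.triu(adj, k=1).tocoo()            # keep each undirected edge once
    edges = np.vstack([upper.row, upper.col]).T  # shape: (num_edges, 2)
    perm = rng.permutation(len(edges))
    n_val = int(len(edges) * val_frac)
    n_test = int(len(edges) * test_frac)
    val_edges = edges[perm[:n_val]]
    test_edges = edges[perm[n_val:n_val + n_test]]
    train_edges = edges[perm[n_val + n_test:]]   # remaining ~85% used for training
    return train_edges, val_edges, test_edges
```

In this protocol the held-out positive edges are typically paired with an equal number of sampled non-edges when computing AUC/AP, but the paper's exact negative-sampling procedure is not stated in the quoted text.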
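The hyperparameters quoted in the Experiment Setup row can be summarized as a configuration sketch. Only the numeric values below come from the paper; the key names and the dictionary layout are illustrative assumptions.

```python
# Hedged configuration sketch; values are quoted from the paper, names are illustrative.
config = {
    "gamma_shape_top": 1e-5,    # gamma shape hyperparameter of the top layer
    "gamma_rate_top": 1e-3,     # gamma rate parameter of the top layer
    "gamma_rate_lower": 1e-2,   # gamma rate parameter of subsequent layers
    "num_layers": 2,            # two layers in both encoder and decoder networks
    "layer_sizes": {            # (bottom, top) layer widths per dataset
        "Cora": (128, 64),
        "Citeseer": (128, 64),
        "Pubmed": (128, 64),
        "NIPS12": (64, 32),
    },
    "epochs": 500,              # all datasets trained for 500 epochs
}
```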