Dirichlet Graph Variational Autoencoder
Authors: Jia Li, Jianwei Yu, Jiajin Li, Honglei Zhang, Kangfei Zhao, Yu Rong, Hong Cheng, Junzhou Huang
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through experiments on graph generation and graph clustering, we demonstrate the effectiveness of our proposed framework. (Section 6, Experiments) |
| Researcher Affiliation | Collaboration | 1 The Chinese University of Hong Kong, 2 Tencent AI Lab, 3 Georgia Institute of Technology; {lijia,jwyu,jjli,kfzhao,hcheng}@se.cuhk.edu.hk, zhanghonglei@gatech.edu, yu.rong@hotmail.com, jzhuang@uta.edu |
| Pseudocode | No | The paper provides mathematical formulations and descriptions of processes but does not include formal pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about open-source code availability or a link to a code repository. |
| Open Datasets | Yes | We follow Graphite [8] and create data sets from six graph families with fixed, known generative processes to evaluate the performance of DGVAE on graph generation. For graph clustering, we use three benchmark data sets, i.e., Pubmed, Citeseer [24], and Wiki [34]. |
| Dataset Splits | No | The paper mentions evaluating on a 'test set', but it does not provide explicit details for training, validation, or test dataset splits (e.g., percentages or counts). |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'minibatch based Adam optimizer' but does not specify versions for any programming languages, libraries, or other software dependencies. |
| Experiment Setup | Yes | For DGVAE/DGAE, we use the same network architecture through all the experiments. We train the model using minibatch based Adam optimizer. We train for 200 iterations with a learning rate of 0.01. The output dimension of the first hidden layer is 32 and that of the second-layer (K) is 16. The Dirichlet prior is set to be 0.01 for all dimensions if not specified otherwise. For Heatts, we let s = 1 for all experiments. |
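
To make the reported configuration concrete, the sketch below wires the stated hyperparameters into a minimal PyTorch training loop: a two-layer graph encoder with hidden dimension 32 and latent dimension K = 16, trained for 200 iterations with Adam at learning rate 0.01. This is a hypothetical illustration, not the authors' code: it substitutes plain GCN-style propagation for the paper's Heatts convolution and omits the Dirichlet KL term (prior concentration 0.01), keeping only simplex-valued latents and an inner-product decoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLayerGraphEncoder(nn.Module):
    """Hypothetical encoder matching the reported sizes (32 hidden, K = 16)."""

    def __init__(self, in_dim, hidden_dim=32, latent_dim=16):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden_dim)
        self.lin2 = nn.Linear(hidden_dim, latent_dim)

    def forward(self, adj_norm, x):
        # GCN-style propagation A_hat @ X @ W per layer (the paper uses
        # Heatts convolutions here; plain propagation is an assumption).
        h = F.relu(adj_norm @ self.lin1(x))
        return adj_norm @ self.lin2(h)  # node-level logits over K = 16 dims

def train(adj_norm, features, num_iters=200, lr=0.01):
    """Adam optimizer, 200 iterations, lr = 0.01, as stated in the paper."""
    model = TwoLayerGraphEncoder(features.size(1))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    adj_label = (adj_norm > 0).float()  # reconstruct the edge pattern
    for _ in range(num_iters):
        opt.zero_grad()
        # Softmax places each node's latent on the simplex, mirroring the
        # Dirichlet-distributed latents; the KL to the Dirichlet prior
        # (concentration 0.01) is omitted in this sketch.
        z = torch.softmax(model(adj_norm, features), dim=-1)
        recon = torch.sigmoid(z @ z.t())  # inner-product decoder
        loss = F.binary_cross_entropy(recon, adj_label)
        loss.backward()
        opt.step()
    return model

# Toy usage: a 4-node cycle with self-loops, symmetrically normalized.
A = torch.tensor([[1, 1, 0, 1],
                  [1, 1, 1, 0],
                  [0, 1, 1, 1],
                  [1, 0, 1, 1]], dtype=torch.float)
deg = A.sum(1)
adj_norm = A / torch.sqrt(deg[:, None] * deg[None, :])
model = train(adj_norm, torch.eye(4))  # one-hot node features
```

Because the decoder scores every node pair, this dense formulation is only practical for small graphs; the minibatch training the paper mentions would be needed at the scale of Pubmed or Wiki.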