Deconvolutional Networks on Graph Data
Authors: Jia Li, Jiajin Li, Yang Liu, Jianwei Yu, Yueting Li, Hong Cheng
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the effectiveness of the proposed GDN with two tasks: graph feature imputation [35, 44] and graph structure generation [19, 14]. For the former task, we further propose a graph autoencoder (GAE) framework that follows the symmetric encoder-decoder fashion of [29]. The proposed GAE outperforms the state of the art on six benchmarks. For the latter task, our proposed GDN can enhance the generation performance of popular variational autoencoder frameworks including VGAE [19] and Graphite [14]. (A minimal sketch of such a symmetric autoencoder follows the table.) |
| Researcher Affiliation | Academia | (1) Hong Kong University of Science and Technology; (2) The Chinese University of Hong Kong. Contact: jialee@ust.hk |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks clearly labeled as such. |
| Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described. |
| Open Datasets | Yes | Datasets: We use six benchmark datasets spanning several domains: citation networks (Cora, Citeseer) [32], product co-purchase networks (Amaphoto, Amacomp) [26], and social rating networks (Douban, Ciao). For the citation and product co-purchase networks, we use the preprocessed versions provided by [35] with a uniformly random missing rate of 10% (see the masking sketch below). For Douban, we use the preprocessed dataset provided by [27]. For Ciao, we use a sub-matrix of 7,317 users and 1,000 items. Dataset statistics are summarized in Table 3 in the Appendix. |
| Dataset Splits | No | For MUTAG and PTC-MR, we use 50% of the samples as the train set and the remaining 50% as the test set. For ZINC, we use the default train-test split. The paper mentions using "validation loss" for early stopping in Appendix D, but does not explicitly give the validation split percentages or sample counts. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment. |
| Experiment Setup | Yes | We train for 200 iterations with a learning rate of 0.01. We train all models for 200 epochs using the Adam optimizer and early stopping with patience 20 on the validation loss. We use a learning rate of 0.001 and a weight decay of 0.0005. (A training-loop sketch with these hyperparameters follows the table.) |
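
The Research Type row describes a graph autoencoder built in a symmetric encoder-decoder fashion, with the proposed GDN serving as the decoder. Since no source code is released, the following is a minimal sketch of such a symmetric architecture, assuming the standard GCN propagation operator for the encoder and a mirrored set of layers for the decoder. The GDN decoder itself (derived in the paper via inverse filtering) is not reproduced here; plain graph convolutions stand in for it, and all class names are hypothetical.

```python
import torch
import torch.nn as nn


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize an adjacency matrix with self-loops:
    D^{-1/2} (A + I) D^{-1/2}, the standard GCN propagation operator."""
    a_hat = adj + torch.eye(adj.size(0))
    deg = a_hat.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt


class GraphConv(nn.Module):
    """One GCN-style layer: A_norm @ H @ W (activation applied by the caller)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h: torch.Tensor, a_norm: torch.Tensor) -> torch.Tensor:
        return a_norm @ self.lin(h)


class SymmetricGAE(nn.Module):
    """Hypothetical symmetric autoencoder: a two-layer GCN encoder and a
    mirrored two-layer decoder mapping embeddings back to feature space.
    The paper's actual GDN decoder uses an inverse (deconvolution) filter;
    here ordinary graph convolutions are used as a stand-in."""

    def __init__(self, feat_dim: int, hid_dim: int, lat_dim: int):
        super().__init__()
        self.enc1 = GraphConv(feat_dim, hid_dim)
        self.enc2 = GraphConv(hid_dim, lat_dim)
        self.dec1 = GraphConv(lat_dim, hid_dim)
        self.dec2 = GraphConv(hid_dim, feat_dim)

    def forward(self, x: torch.Tensor, a_norm: torch.Tensor) -> torch.Tensor:
        z = self.enc2(torch.relu(self.enc1(x, a_norm)), a_norm)
        return self.dec2(torch.relu(self.dec1(z, a_norm)), a_norm)
```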
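
The Open Datasets row reports a uniformly random missing rate of 10% for the citation and co-purchase benchmarks. The preprocessing script is not released, so the following is a sketch of one plausible reading of "uniform randomly missing": each feature entry is dropped independently with probability 0.1, and imputation is later scored on the dropped entries.

```python
import torch


def mask_features(x: torch.Tensor, missing_rate: float = 0.1, seed: int = 0):
    """Zero out a uniformly random fraction of feature entries and return
    both the corrupted matrix and the boolean mask of observed entries.
    The imputation loss would then be evaluated on the missing entries only."""
    gen = torch.Generator().manual_seed(seed)
    observed = torch.rand(x.shape, generator=gen) >= missing_rate
    return x * observed.float(), observed


# Example with Cora-like dimensions (2708 nodes, 1433 bag-of-words features).
x = torch.rand(2708, 1433)
x_corrupted, observed = mask_features(x, missing_rate=0.1)
```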
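
The Experiment Setup row quotes Adam, 200 epochs, a learning rate of 0.001, a weight decay of 0.0005, and early stopping with patience 20 on the validation loss. A minimal training loop consistent with those hyperparameters is sketched below; the `loss_fn(model, split=...)` callable is a placeholder for whichever task loss and data the paper's experiments use.

```python
import copy

import torch


def train(model, loss_fn, epochs=200, lr=1e-3, weight_decay=5e-4, patience=20):
    """Adam with weight decay plus patience-based early stopping on the
    validation loss, matching the hyperparameters quoted above."""
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    best_val, best_state, stale = float("inf"), None, 0
    for _ in range(epochs):
        model.train()
        opt.zero_grad()
        loss = loss_fn(model, split="train")  # placeholder task loss
        loss.backward()
        opt.step()

        model.eval()
        with torch.no_grad():
            val = loss_fn(model, split="val").item()
        if val < best_val:
            best_val = val
            best_state = copy.deepcopy(model.state_dict())
            stale = 0
        else:
            stale += 1
            if stale >= patience:  # stop after 20 epochs without improvement
                break
    if best_state is not None:
        model.load_state_dict(best_state)  # restore the best checkpoint
    return model
```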