Graph Deformer Network
Authors: Wenting Zhao, Yuan Fang, Zhen Cui, Tong Zhang, Jian Yang
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on widely-used datasets validate the effectiveness of GDN in graph and node classifications. |
| Researcher Affiliation | Academia | Wenting Zhao, Yuan Fang, Zhen Cui, Tong Zhang and Jian Yang. Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology. {wtingzhao, fangyuan, zhen.cui, tong.zhang, csjyang}@njust.edu.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The proofs of the above two propositions can be found in the supplementary material.¹ (¹ https://github.com/wtzhao1631/gdn) |
| Open Datasets | Yes | For node classification, three citation graphs are used: Cora, Citeseer, and Pubmed. We adopt the data preprocessed in the work [Yang et al., 2016]... For graph classification, we adopt eight datasets [Jiang et al., 2019] to assess our GDN method: MUTAG, PTC, NCI1, PROTEINS, ENZYMES, IMDB-BINARY, IMDB-MULTI, and REDDIT-MULTI-12K. |
| Dataset Splits | Yes | We randomly divide the dataset with the proportion of 9:1, where nine folds serve as the training set and the remaining fold as the testing set. The accuracies are reported as mean ± standard deviation over 10-fold cross-validation. ... We adopt the data preprocessed in the work [Yang et al., 2016], and follow its data partitioning rules. (A minimal sketch of this split protocol appears after the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers like Python 3.8, CPLEX 12.4) needed to replicate the experiment. |
| Experiment Setup | Yes | Graph classification. ... Momentum optimizer to train the network for 500 epochs, where its batch size, initial learning rate, decay rate and momentum are set to 128, 0.05, 0.95 and 0.9, respectively. The dropout rate is set to 0.5, and the ReLU unit is leveraged as the nonlinear activation function. Node classification. ... Adam optimizer to train the model for 500 epochs with an initial learning rate of 0.05 and a decay rate of 0.95. The dropout rate is set to 0.5 and the ReLU unit is the nonlinear activation function. ... Also, the number of anchor nodes is set to 16 and the scale of the neighborhood is set to 2. (A training-loop sketch wiring in these hyperparameters appears after the table.) |
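
The 9:1 division quoted in the Dataset Splits row is a standard 10-fold cross-validation protocol. The sketch below shows how such splits and the mean ± standard deviation summary could be produced; it assumes scikit-learn's `KFold` and a caller-supplied `train_and_eval` routine, both hypothetical stand-ins rather than the authors' code.

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(graphs, labels, train_and_eval, seed=0):
    """10-fold CV: each fold trains on 9/10 of the data and tests on 1/10.

    `train_and_eval` is any callable that fits a model on the training
    indices and returns test accuracy; it is an assumed interface.
    """
    kfold = KFold(n_splits=10, shuffle=True, random_state=seed)
    accuracies = []
    for train_idx, test_idx in kfold.split(graphs):
        accuracies.append(train_and_eval(graphs, labels, train_idx, test_idx))
    accs = np.array(accuracies)
    return accs.mean(), accs.std()  # reported as mean ± standard deviation
```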
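
The Experiment Setup row quotes concrete hyperparameters for graph classification. The PyTorch sketch below wires those numbers into a generic training loop. The two-layer model, the dummy tensors, and the reading of "decay rate 0.95" as a per-epoch exponential learning-rate decay are all assumptions for illustration; this is not the GDN architecture itself. The node-classification setup would differ only in swapping `SGD` for `Adam`.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Dummy pooled graph features stand in for GDN's graph representations.
features = torch.randn(512, 128)
labels = torch.randint(0, 2, (512,))
loader = DataLoader(TensorDataset(features, labels),
                    batch_size=128, shuffle=True)  # batch size 128

model = nn.Sequential(         # placeholder classifier, not GDN
    nn.Linear(128, 64),
    nn.ReLU(),                 # ReLU as the nonlinear activation
    nn.Dropout(p=0.5),         # dropout rate 0.5
    nn.Linear(64, 2),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)
criterion = nn.CrossEntropyLoss()

for epoch in range(500):       # 500 epochs
    model.train()
    for x, y in loader:
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()
    scheduler.step()           # learning rate *= 0.95 each epoch (assumed schedule)
```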