Adaptive Kernel Graph Neural Network
Authors: Mingxuan Ju, Shifu Hou, Yujie Fan, Jianan Zhao, Yanfang Ye, Liang Zhao
AAAI 2022, pp. 7051-7058
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are conducted on acknowledged benchmark datasets and promising results demonstrate the outstanding performance of our proposed AKGNN by comparison with state-of-the-art GNNs. |
| Researcher Affiliation | Academia | (1) University of Notre Dame, Notre Dame, IN 46556; (2) Case Western Reserve University, Cleveland, OH 44106; (3) Emory University, Atlanta, GA 30322 |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code is publicly available at: https://github.com/jumxglhf/AKGNN. |
| Open Datasets | Yes | The three datasets we evaluate are Cora, Citeseer and Pubmed (Sen et al. 2008). |
| Dataset Splits | Yes | The publicly fixed split is used: 20 nodes per class for training, 500 nodes for validation, and 1,000 nodes for testing (see the loading sketch below the table). |
| Hardware Specification | Yes | All the experiments in this work are implemented on a single NVIDIA GeForce RTX 2080 Ti with 11 GB memory, and we didn't encounter any memory bottleneck issue while running all experiments. |
| Software Dependencies | No | The paper mentions 'We utilize PyTorch as our deep learning framework to implement AKGNN' but does not specify a version number for PyTorch or for any other software dependency. |
| Experiment Setup | Yes | The number of layers K is 5, the hidden size d(K) is 64, the dropout rate between propagation layers is 0.6, the learning rate is 0.01, the weight decay rate is 5e-4, and the patience for early stopping is 100 iterations (collected in the configuration sketch below). |
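For reference, the fixed public split quoted in the Dataset Splits row can be loaded directly with PyTorch Geometric. This is a minimal sketch under the assumption that `torch_geometric` is available; the paper itself only names PyTorch as its framework:

```python
from torch_geometric.datasets import Planetoid

# "public" is the standard fixed Planetoid split quoted in the table:
# 20 labeled nodes per class for training, 500 for validation, 1,000 for testing.
dataset = Planetoid(root="data/Cora", name="Cora", split="public")
data = dataset[0]

print(data.train_mask.sum().item())  # 140 = 20 nodes x 7 Cora classes
print(data.val_mask.sum().item())    # 500
print(data.test_mask.sum().item())   # 1000
```

The same loader accepts `name="CiteSeer"` and `name="PubMed"` for the other two benchmarks.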
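The hyperparameters from the Experiment Setup row can likewise be collected into a single training configuration. The sketch below is illustrative only: the placeholder model, the Adam optimizer, and the early-stopping helper are assumptions, since the quoted setup reports only hyperparameter values; the actual AKGNN architecture is in the released repository.

```python
import torch
import torch.nn as nn

# Hyperparameters exactly as reported in the Experiment Setup row.
CONFIG = {
    "num_layers": 5,       # K, number of propagation layers
    "hidden_size": 64,     # d(K)
    "dropout": 0.6,        # dropout rate between propagation layers
    "lr": 0.01,
    "weight_decay": 5e-4,
    "patience": 100,       # early-stopping patience, in iterations
}

# Placeholder model standing in for AKGNN (illustrative only); feature and
# class dimensions match Cora (1,433 features, 7 classes).
model = nn.Sequential(
    nn.Linear(1433, CONFIG["hidden_size"]),
    nn.ReLU(),
    nn.Dropout(CONFIG["dropout"]),
    nn.Linear(CONFIG["hidden_size"], 7),
)

# Adam is an assumption; the quoted setup does not name the optimizer.
optimizer = torch.optim.Adam(
    model.parameters(), lr=CONFIG["lr"], weight_decay=CONFIG["weight_decay"]
)

class EarlyStopper:
    """Stop once validation loss fails to improve for `patience` checks."""
    def __init__(self, patience: int):
        self.patience, self.best, self.wait = patience, float("inf"), 0

    def step(self, val_loss: float) -> bool:
        if val_loss < self.best:
            self.best, self.wait = val_loss, 0
        else:
            self.wait += 1
        return self.wait >= self.patience

stopper = EarlyStopper(CONFIG["patience"])
```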