How Framelets Enhance Graph Neural Networks
Authors: Xuebin Zheng, Bingxin Zhou, Junbin Gao, Yuguang Wang, Pietro Lió, Ming Li, Guido Montúfar
ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we show a variety of numerical tests for our framelet convolution and pooling. |
| Researcher Affiliation | Academia | 1The University of Sydney Business School, The University of Sydney, Camperdown, NSW 2006, Australia. 2Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany. 3Institute of Natural Sciences and School of Mathematical Sciences, Shanghai Jiao Tong University, China. 4School of Mathematics and Statistics, The University of New South Wales, Sydney, Australia. 5Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom. 6Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, China. 7Department of Mathematics and Department of Statistics, University of California, Los Angeles, United States. |
| Pseudocode | No | The paper describes computational flows and algorithms in paragraph text and figures (e.g., Figure 1), but does not contain any structured pseudocode or algorithm blocks labeled as "Algorithm" or "Pseudocode". |
| Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described, nor does it contain an explicit code release statement or repository link. |
| Open Datasets | Yes | The first experiment of node classification tasks is conducted on Cora, Citeseer and Pubmed, which are three benchmark citation networks. Moreover, we employ ogbn-arxiv from open graph benchmark OGB (Hu et al., 2020) to illustrate the power of our framelet convolution on large-scale graph-structured data. |
| Dataset Splits | Yes | Each dataset is split into training, validation and test sets by 80%, 10% and 10%. The training stops when the validation loss stops improving for 20 consecutive epochs or reaching maximum 200 epochs. |
| Hardware Specification | Yes | All experiments run in PyTorch on NVIDIA® Tesla V100 GPU with 5,120 CUDA cores and 16GB HBM2 mounted on an HPC cluster. |
| Software Dependencies | No | The paper mentions 'PyTorch' as the software used but does not provide any specific version numbers for it or any other key software dependencies required for replication. |
| Experiment Setup | Yes | Most hyperparameters are set to default, except for learning rate, weight decay, hidden units and dropout ratio in training. A grid search is conducted for fine tuning on these hyperparameters from the search space detailed in Appendix. Both methods are trained with the ADAM optimizer. The maximum number of epochs is 200 for citation networks and 500 for ogbn-arxiv. |
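The Dataset Splits and Experiment Setup rows above describe a standard training protocol: an 80%/10%/10% train/validation/test split, the ADAM optimizer, early stopping after 20 epochs without validation-loss improvement, and a 200-epoch cap for the citation networks. Below is a minimal PyTorch sketch of that protocol, not the authors' implementation; the `PlaceholderNet` model, the random data, and the hyperparameter values are hypothetical stand-ins used only to make the loop self-contained.

```python
# Minimal sketch of the reported training protocol (assumptions noted below).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical node features and labels standing in for a citation network.
num_nodes, num_feats, num_classes = 1000, 64, 7
X = torch.randn(num_nodes, num_feats)
y = torch.randint(0, num_classes, (num_nodes,))

# 80% / 10% / 10% train / validation / test split, as quoted in the table.
perm = torch.randperm(num_nodes)
n_train, n_val = int(0.8 * num_nodes), int(0.1 * num_nodes)
train_idx = perm[:n_train]
val_idx = perm[n_train:n_train + n_val]
test_idx = perm[n_train + n_val:]

class PlaceholderNet(nn.Module):
    """Hypothetical stand-in for the paper's framelet convolution model."""
    def __init__(self, in_dim, hidden, out_dim, dropout):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

# Hyperparameters the paper reports tuning by grid search (values here are illustrative).
lr, weight_decay, hidden_units, dropout = 0.01, 5e-4, 64, 0.5

model = PlaceholderNet(num_feats, hidden_units, num_classes, dropout)
optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
criterion = nn.CrossEntropyLoss()

best_val, patience, bad_epochs = float("inf"), 20, 0
for epoch in range(200):  # maximum of 200 epochs for citation networks
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(X[train_idx]), y[train_idx])
    loss.backward()
    optimizer.step()

    # Early stopping: halt once validation loss fails to improve for 20 epochs.
    model.eval()
    with torch.no_grad():
        val_loss = criterion(model(X[val_idx]), y[val_idx]).item()
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break
```

The grid search mentioned in the Experiment Setup row would wrap this loop over candidate values of the learning rate, weight decay, hidden units, and dropout ratio; the paper defers the exact search space to its Appendix.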