Learning Adaptive Multiresolution Transforms via Meta-Framelet-based Graph Convolutional Network

Authors: Tianze Luo, Zhanfeng Mo, Sinno Jialin Pan

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that our MM-FGCN achieves SOTA performance on various graph learning tasks. The code is available on GitHub. ... We conduct experiments on both assortative and disassortative graph datasets. ... Table 1: Test accuracy (in percentage)... Table 2: Performance comparison for graph property prediction... Table 3: Ablation study on the meta-framelet learner and the meta-learning algorithm.
Researcher Affiliation | Collaboration | Tianze Luo 1, Zhanfeng Mo 1, Sinno Jialin Pan 1,2; 1 Nanyang Technological University, Singapore; 2 The Chinese University of Hong Kong; {tianze001, zhanfeng001}@ntu.edu.sg, sinnopan@cuhk.edu.hk. ... This research is supported, in part, by Alibaba Group through the Alibaba Innovative Research (AIR) Program and Alibaba-NTU Singapore Joint Research Institute (JRI), Nanyang Technological University, Singapore.
Pseudocode | Yes | Algorithm 1: MM-FGConv ... Algorithm 2: Meta-training MM-FGCN
Open Source Code | Yes | The code is available on GitHub: https://github.com/ltz0120/graph-multiresolution-meta-framelet
Open Datasets | Yes | We conduct experiments on both assortative and disassortative graph datasets. A dataset is called assortative if its neighboring nodes usually have similar labels and features (Ma et al., 2022), as observed in citation networks and community networks. ... including Cora, Citeseer, and Pubmed (Sen et al., 2008), as well as disassortative datasets, including Cornell (Craven et al., 1998), Texas (Craven et al., 1998), Wisconsin (Craven et al., 1998), Chameleon (Rozemberczki et al., 2021), and Squirrel (Rozemberczki et al., 2021). ... We assess the efficacy of MM-FGCN on 6 benchmark graph classification and regression datasets, including D&D (Dobson & Doig, 2003), PROTEINS (Dobson & Doig, 2003), NCI1 (Wale et al., 2008), Mutagenicity (Kazius et al., 2005), Ogbg-molhiv (Hu et al., 2020), and QM7 (Blum & Reymond, 2009). All the datasets contain more than 1,000 graphs with varying graph structures... Table 4: Statistics of the node-classification datasets used in our experiments. ... Table 5: Summary of the datasets for the graph property prediction tasks.
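The paper does not name a data-loading library, but every benchmark quoted above is distributed with PyTorch Geometric or OGB. The sketch below is an assumption about how the same datasets could be fetched for a reproduction attempt; the class names come from those libraries, not from the authors' repository.

```python
# Hedged loading sketch: the paper only names the datasets, not the loader.
# All class names below come from PyTorch Geometric / OGB and are assumptions,
# not taken from the authors' released code.
from torch_geometric.datasets import Planetoid, WebKB, WikipediaNetwork, TUDataset
from ogb.graphproppred import PygGraphPropPredDataset

# Assortative citation networks (node classification)
cora = Planetoid(root="data/Planetoid", name="Cora")
citeseer = Planetoid(root="data/Planetoid", name="CiteSeer")
pubmed = Planetoid(root="data/Planetoid", name="PubMed")

# Disassortative datasets
cornell = WebKB(root="data/WebKB", name="Cornell")
texas = WebKB(root="data/WebKB", name="Texas")
wisconsin = WebKB(root="data/WebKB", name="Wisconsin")
chameleon = WikipediaNetwork(root="data/Wiki", name="chameleon")
squirrel = WikipediaNetwork(root="data/Wiki", name="squirrel")

# Graph property prediction
dd = TUDataset(root="data/TU", name="DD")
proteins = TUDataset(root="data/TU", name="PROTEINS")
nci1 = TUDataset(root="data/TU", name="NCI1")
mutagenicity = TUDataset(root="data/TU", name="Mutagenicity")
molhiv = PygGraphPropPredDataset(name="ogbg-molhiv", root="data/ogb")
# QM7 (Blum & Reymond, 2009) is omitted here; it is usually obtained from the
# original .mat release rather than from these libraries.
```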
Dataset Splits | Yes | For assortative datasets, following the configuration in (Kipf & Welling, 2016), we allocate 20 nodes per class for training, 1,000 nodes for testing, and 500 for validation. As for disassortative datasets, we divide each dataset into training, validation, and test sets using a split ratio of 60%:20%:20%. ... In our experiment, we take 80% of the training data as S_main and 20% as S_meta.
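The quoted split protocol is fully specified by a few numbers, so a reproduction can implement it directly. Below is a minimal sketch, assuming PyTorch index tensors; the function names and seed handling are illustrative, not the authors' code. For the assortative citation datasets, the standard Planetoid public split already matches the quoted configuration (20 labeled nodes per class, 500 validation nodes, 1,000 test nodes).

```python
# Minimal split sketch following the quoted protocol; names are illustrative.
import torch

def random_node_split(num_nodes: int, train_frac: float = 0.6,
                      val_frac: float = 0.2, seed: int = 0):
    """60%:20%:20% train/val/test node split for the disassortative datasets."""
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=g)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    return perm[:n_train], perm[n_train:n_train + n_val], perm[n_train + n_val:]

def split_main_meta(train_idx: torch.Tensor, main_frac: float = 0.8, seed: int = 0):
    """Carve the training nodes into S_main (80%) and S_meta (20%)."""
    g = torch.Generator().manual_seed(seed)
    shuffled = train_idx[torch.randperm(train_idx.numel(), generator=g)]
    n_main = int(main_frac * shuffled.numel())
    return shuffled[:n_main], shuffled[n_main:]
```

A reproduction would call random_node_split on each disassortative graph and then split_main_meta on the resulting training indices to obtain the two subsets used during meta-training.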
Hardware Specification | Yes | The experiments are conducted on a single 40G A100 GPU.
Software Dependencies | No | The paper states 'We implement our model using PyTorch' but does not specify version numbers for PyTorch or any other software libraries, environments, or solvers used in the experiments.
Experiment Setup | Yes | We set the default number of filters as four, which is suitable for most of the datasets. The default Chebyshev approximation order is set to 6. The dimension of hidden variables is searched from {16, 32, 64}, and the level of filters is selected from {2, 3, 4, 5}. Other hyperparameters are set at: 0.001 for the learning rate, 0.001 for weight decay, 0.5 for dropout, and 2 for the number of MM-FGConv layers. ... For assortative datasets, following the configuration in (Kipf & Welling, 2016), we allocate 20 nodes per class for training, 1,000 nodes for testing, and 500 for validation. As for disassortative datasets, we divide each dataset into training, validation, and test sets using a split ratio of 60%:20%:20%.
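For convenience, the reported defaults and search ranges can be collected in one place; the dictionary layout and key names below are illustrative, not taken from the released code.

```python
# Reported defaults and search ranges, consolidated; key names are assumptions.
default_config = {
    "num_filters": 4,        # default number of framelet filters
    "chebyshev_order": 6,    # Chebyshev approximation order
    "learning_rate": 1e-3,
    "weight_decay": 1e-3,
    "dropout": 0.5,
    "num_layers": 2,         # number of MM-FGConv layers
}

search_space = {
    "hidden_dim": [16, 32, 64],      # dimension of hidden variables
    "filter_level": [2, 3, 4, 5],    # level of filters
}
```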