CKGConv: General Graph Convolution with Continuous Kernels

Authors: Liheng Ma, Soumyasundar Pal, Yitian Zhang, Jiaming Zhou, Yingxue Zhang, Mark Coates

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Each reproducibility variable is listed below with its assessed result, followed by the LLM response quoting or summarizing the supporting evidence.

Research Type: Experimental
"Empirically, we show that CKGConv-based Networks outperform existing graph convolutional networks and perform comparably to the best graph transformers across a variety of graph datasets."

Researcher Affiliation: Collaboration
"¹Department of ECE, McGill University, Montreal, Canada; ²Mila - Québec AI Institute, Montreal, Canada; ³ILLS - International Laboratory on Learning Systems, Montreal, Canada; ⁴Huawei Noah's Ark Lab, Montreal, Canada"

Pseudocode: No
The paper includes mathematical equations and architectural diagrams, but no explicit pseudocode or algorithm blocks are provided.

Open Source Code: Yes
"The code and models are publicly available at https://github.com/networkslab/CKGConv."

Open Datasets: Yes
"We evaluate our proposed method on five datasets from Benchmarking GNNs (Dwivedi et al., 2022a) and another two datasets from the Long-Range Graph Benchmark (Dwivedi et al., 2022c). These benchmarks include diverse node- and graph-level learning tasks such as node classification, graph classification, and graph regression. The statistics of these datasets and further details of the experimental setup are deferred to Appendix C." and "Table 6. Overview of the graph learning datasets involved in this work (Dwivedi et al., 2022a;c; Irwin et al., 2012)."

Dataset Splits: Yes
"We conduct the experiments on the standard train/validation/test splits of the evaluated benchmarks, following previous works (Rampášek et al., 2022; Ma et al., 2023)."
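
For readers checking this claim, the Benchmarking GNNs datasets ship with fixed splits that common libraries expose directly. Below is a minimal sketch, assuming PyTorch Geometric (not necessarily the authors' data pipeline), of loading ZINC with its standard train/validation/test splits:

```python
# Minimal sketch: loading the ZINC (subset) benchmark with its fixed,
# standard splits via PyTorch Geometric. Illustrative only -- the
# authors' repository may use a different data pipeline.
from torch_geometric.datasets import ZINC
from torch_geometric.loader import DataLoader

root = "data/ZINC"  # hypothetical local path
train_set = ZINC(root, subset=True, split="train")
val_set = ZINC(root, subset=True, split="val")
test_set = ZINC(root, subset=True, split="test")

# The splits are fixed by the benchmark, not re-sampled per run.
print(len(train_set), len(val_set), len(test_set))  # 10000 / 1000 / 1000

train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
```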

Hardware Specification: Yes
"The timing is conducted on a single NVIDIA V100 GPU (CUDA 11.8) and 20 threads of Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz."
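
The paper does not publish its timing script; the following hedged sketch shows how forward-pass latency is typically measured on a CUDA device such as the V100 above, with explicit synchronization so asynchronous kernel launches do not distort the numbers:

```python
# Illustrative GPU timing harness (not the authors' script). CUDA calls
# are asynchronous, so we synchronize before reading the clock.
import time
import torch

@torch.no_grad()
def time_forward(model, batch, n_warmup=10, n_runs=100):
    model.eval()
    for _ in range(n_warmup):      # warm-up amortizes one-off setup costs
        model(batch)
    torch.cuda.synchronize()       # flush pending kernels before timing
    start = time.perf_counter()
    for _ in range(n_runs):
        model(batch)
    torch.cuda.synchronize()       # wait for all kernels to finish
    return (time.perf_counter() - start) / n_runs  # mean seconds per pass
```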

Software Dependencies: No
The paper mentions "CUDA 11.8" but does not give version numbers for other key software components, such as Python or the deep learning framework (e.g., PyTorch, TensorFlow).
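
To make this gap concrete: a few lines like the following, run once and logged alongside the results, would pin down the missing versions. This assumes a PyTorch stack, which the CUDA 11.8 mention suggests but the paper does not confirm:

```python
# Record the software environment the paper leaves unspecified.
# Assumes PyTorch; swap in the relevant framework otherwise.
import platform
import torch

print("Python       :", platform.python_version())
print("PyTorch      :", torch.__version__)
print("CUDA (torch) :", torch.version.cuda)
if torch.cuda.is_available():
    print("GPU          :", torch.cuda.get_device_name(0))
```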

Experiment Setup: Yes
"The final hyperparameters are presented in Table 7 and Table 8."