Structure-Aware Random Fourier Kernel for Graphs
Authors: Jinyuan Fang, Qiang Zhang, Zaiqiao Meng, Shangsong Liang
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on five real-world datasets show that our model can achieve state-of-the-art performance in two typical graph learning tasks, i.e., node classification and link prediction. |
| Researcher Affiliation | Collaboration | 1 School of Computer Science and Engineering, Sun Yat-sen University, China; 2 Guangdong Key Laboratory of Big Data Analysis and Processing, Guangzhou, China; 3 Hangzhou Innovation Center, Zhejiang University, China; 4 College of Computer Science and Technology, Zhejiang University, China; 5 AZFT Knowledge Engine Lab, China; 6 School of Computing Science, University of Glasgow, United Kingdom; 7 Mohamed bin Zayed University of Artificial Intelligence, United Arab Emirates |
| Pseudocode | Yes | The pseudo-code can be found in Appendix C. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a direct link to a code repository for the methodology described. |
| Open Datasets | Yes | To evaluate GPSRF in graph learning tasks, we conduct experiments on three benchmark citation networks [65], including Cora, Citeseer and Pubmed, and two Amazon product co-purchase networks [66], including Photo and Computers. |
| Dataset Splits | Yes | For the co-purchase networks, we randomly select 20% of nodes as training nodes, 10% of nodes as validation nodes, and the remaining nodes are treated as test nodes. (A minimal split sketch follows the table.) |
| Hardware Specification | No | The paper mentions computational complexity and implies the use of GPUs (common for such tasks), but does not specify any exact hardware models (e.g., CPU, GPU, memory, or cloud instance types) used for the experiments. |
| Software Dependencies | No | The paper mentions 'Adam optimizer [69]' but does not provide specific version numbers for software dependencies such as programming languages, deep learning frameworks (e.g., PyTorch, TensorFlow), or other libraries used. |
| Experiment Setup | Yes | Throughout the experiments, to enable mini-batch training, we set the transformation function g as a two-layer GraphSAGE network with mean aggregator, where both the aggregation function in Eq. (2) and the combination function in Eq. (3) are mean functions. As for the RF kernel function kθ, we set the base distribution p(ϵ) as the standard Gaussian and use the Gaussian conditional pθ(ω|ϵ) = N(ω; µθ(ϵ), σθ(ϵ)), whose mean and variance are parameterized with neural networks. The parameters of our model are learned with the Adam optimizer [69] through alternating steps of an inference procedure and a learning procedure. (A parameterization sketch follows the table.) |
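As a concrete reading of the split protocol quoted in the Dataset Splits row, the following is a minimal sketch of a 20%/10%/70% random node split. The function name, seeding, and NumPy usage are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def random_node_split(num_nodes, train_frac=0.20, val_frac=0.10, seed=0):
    """Randomly split node indices into 20% train / 10% validation / 70% test,
    mirroring the co-purchase-network protocol quoted above."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    return perm[:n_train], perm[n_train:n_train + n_val], perm[n_train + n_val:]
```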
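The RF kernel described in the Experiment Setup row can be read as a reparameterized random-feature construction: frequencies ω are obtained by pushing samples from the standard-Gaussian base p(ϵ) through neural networks µθ and σθ. The sketch below follows only that description; the framework (PyTorch), the hidden sizes, the diagonal-covariance reading of σθ, and the cos/sin feature map are assumptions, since the quoted excerpt does not pin them down.

```python
import torch
import torch.nn as nn

class RFKernelSketch(nn.Module):
    """Sketch of a learnable random-feature kernel k_theta.

    Frequencies omega are drawn by transforming samples from the base
    distribution p(eps) = N(0, I) through a Gaussian conditional
    p_theta(omega | eps) = N(omega; mu_theta(eps), sigma_theta(eps)),
    whose mean and (assumed diagonal) scale are neural networks.
    """
    def __init__(self, dim, num_features, hidden=64):  # hidden size is an assumption
        super().__init__()
        self.num_features = num_features
        self.mu = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        self.log_sigma = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        self.dim = dim

    def sample_omega(self):
        eps = torch.randn(self.num_features, self.dim)           # eps ~ p(eps) = N(0, I)
        delta = torch.randn_like(eps)                            # reparameterization noise
        # omega ~ p_theta(omega | eps), kept differentiable in theta
        return self.mu(eps) + self.log_sigma(eps).exp() * delta

    def features(self, h):
        """Random Fourier features of node representations h = g(X, A)."""
        omega = self.sample_omega()                              # (D, dim)
        proj = h @ omega.t()                                     # (N, D)
        # [cos, sin] map: phi_i . phi_j = (1/D) sum_d cos(omega_d . (h_i - h_j))
        return torch.cat([proj.cos(), proj.sin()], dim=-1) / (self.num_features ** 0.5)

    def forward(self, h):
        phi = self.features(h)
        return phi @ phi.t()                                     # Gram matrix approximating k_theta
```

Because ω is produced via the reparameterization trick (ω = µθ(ϵ) + σθ(ϵ) ⊙ δ with δ ~ N(0, I)), gradients flow through the kernel parameters θ, which is what makes alternating inference and learning steps with the Adam optimizer possible in this reading.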