Understanding Non-linearity in Graph Neural Networks from the Bayesian-Inference Perspective
Authors: Rongzhe Wei, Haoteng Yin, Junteng Jia, Austin R. Benson, Pan Li
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive evaluation on both synthetic and real datasets demonstrates our theory. Specifically, the node misclassification errors on three citation networks with different levels of attributed information (Gaussian attributes) are shown in Fig. 1, which precisely match the above conclusions. |
| Researcher Affiliation | Academia | Rongzhe Wei1, Haoteng Yin1, Junteng Jia2, Austin R. Benson2, Pan Li1 1 Department of Computer Science, Purdue University 2 Department of Computer Science, Cornell University |
| Pseudocode | No | The paper does not contain any section explicitly labeled “Pseudocode” or “Algorithm”, nor are there structured code-like blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/Graph-COM/Bayesian_inference_based_GNN.git. |
| Open Datasets | Yes | The experiments compare non-linear and linear models under Gaussian and Laplacian attributes on three benchmark citation networks: PubMed, Cora, and CiteSeer [92]. A dataset-loading sketch follows the table. |
| Dataset Splits | No | One graph is used for training and the other for testing. |
| Hardware Specification | No | The provided text does not explicitly describe the hardware used to run the experiments; it only refers to Appendix G for compute resources, which is not available. |
| Software Dependencies | No | The provided text does not explicitly list any software dependencies with specific version numbers. |
| Experiment Setup | Yes | The model is trained with the Adam optimizer (learning rate = 1e-2, weight decay = 5e-4). A hedged training-setup sketch follows the table. |
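
For context, the three citation benchmarks named above are the standard Planetoid datasets, most commonly loaded through PyTorch Geometric. The following is a minimal sketch under that assumption; the authors' repository may prepare the data differently (e.g., with synthetic Gaussian or Laplacian attributes):

```python
# Minimal sketch (assumption): loading PubMed, Cora, and CiteSeer via
# PyTorch Geometric's Planetoid loaders. The paper's repository may use
# its own data-preparation pipeline instead.
from torch_geometric.datasets import Planetoid

for name in ("PubMed", "Cora", "CiteSeer"):
    dataset = Planetoid(root=f"data/{name}", name=name)
    data = dataset[0]  # each benchmark is a single graph
    print(name, data.num_nodes, data.num_edges, dataset.num_classes)
```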
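
The reported optimizer settings map directly onto a standard PyTorch training loop. Below is a hedged sketch: only the Adam hyperparameters (learning rate = 1e-2, weight decay = 5e-4) come from the excerpt above; the two-layer GCN and the epoch count are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of the reported training setup. Only the Adam hyperparameters
# (lr=1e-2, weight_decay=5e-4) are from the paper; the GCN architecture and
# epoch count below are generic assumptions for illustration.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv

dataset = Planetoid(root="data/Cora", name="Cora")
data = dataset[0]

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

model = GCN(dataset.num_features, 16, dataset.num_classes)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2, weight_decay=5e-4)

model.train()
for epoch in range(200):  # epoch count is an assumption, not stated in the excerpt
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()
```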