Forward Learning of Graph Neural Networks
Authors: Namyong Park, Xing Wang, Antoine Simoulin, Shuai Yang, Grey Yang, Ryan A. Rossi, Puja Trivedi, Nesreen K. Ahmed
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on real-world datasets show the effectiveness and generality of the proposed forward graph learning framework. We release our code at https://github.com/facebookresearch/forwardgnn. |
| Researcher Affiliation | Collaboration | Namyong Park¹, Xing Wang¹, Antoine Simoulin¹, Shuai Yang¹, Grey Yang¹, Ryan Rossi², Puja Trivedi³, Nesreen Ahmed⁴; ¹Meta AI, ²Adobe Research, ³University of Michigan, ⁴Intel Labs |
| Pseudocode | Yes | Algorithm 1: Forward-Forward Learning of GNNs for Node Classification (Section 3.1). Input: GNN G, graph G, input features X, node labels y. Output: trained GNN G. ... Algorithm 4: Single-Forward Learning of GNNs for Link Prediction (Section 3.4). Input: GNN G, graph G, input features X. Output: trained GNN G. A hedged code sketch of the forward-forward procedure appears after this table. |
| Open Source Code | Yes | We release our code at https://github.com/facebookresearch/forwardgnn. |
| Open Datasets | Yes | We use five real-world graphs drawn from three domains: PUBMED, CITESEER, and CORA-ML are citation networks; AMAZON is a co-purchase network; GITHUB is a followership graph. Table 2 in Appendix A provides the summary statistics of these graphs. Appendix A also presents a more detailed description of these datasets. All datasets used in this work are publicly accessible (Sec. 4.1). |
| Dataset Splits | Yes | We randomly generate the train-validation-test node splits with a ratio of 64%-16%-20%, and evaluate the performance in terms of classification accuracy. For link prediction, we split the edges randomly into train-validation-test sets with the same 64%-16%-20% ratio, which form the positive edge sets. A sketch of reproducing these splits with PyTorch Geometric appears after this table. |
| Hardware Specification | Yes | Experiments were performed on a Linux server running CentOS 9, with an NVIDIA H100 GPU, AMD EPYC 9654 96-core processors, and 2.2TB RAM. |
| Software Dependencies | Yes | We implemented FORWARDGNN in Python 3.8, using PyTorch (Paszke et al., 2019) v1.13.1. We used PyTorch Geometric (Fey & Lenssen, 2019) v2.2.0 for the implementations of GCN, SAGE, and GAT. A version-check snippet appears after this table. |
| Experiment Setup | Yes | For all GNNs and across all graph learning tasks, we set the size of hidden units to 128. We used the Adam optimizer with a learning rate of 0.001 and a weight decay of 0.0005. For both tasks, we split the data (i.e., nodes and edges) randomly into train-validation-test sets with a ratio of 64%-16%-20%. We set the max training epochs to 1000 and applied validation-based early stopping with a patience of 100 for both tasks. For FF and its variants, we set the threshold parameter θ to 2.0, and for FF-SymBa, we set its α parameter to 4.0, following the settings in the original papers. We set the temperature parameter τ to 1.0 in Eq. (6). A training-loop sketch using these hyperparameters appears after this table. |
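
To make the reported pseudocode concrete, here is a minimal sketch of forward-forward (FF) training for a single GNN layer on node classification, assuming Hinton-style label appending and the θ = 2.0 threshold from the experiment setup above. The helper names (`goodness`, `make_inputs`, `train_layer`) and the choice of `SAGEConv` are illustrative assumptions, not the authors' released implementation; consult the linked repository for the actual code.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv

THETA = 2.0  # goodness threshold reported in the experiment setup

def goodness(h):
    # FF "goodness" score: sum of squared activations per node
    return h.pow(2).sum(dim=-1)

def make_inputs(x, y, num_classes, positive):
    # Append a one-hot label to each node's features; negative samples use
    # randomly chosen incorrect labels (one common FF construction).
    if positive:
        labels = y
    else:
        offset = torch.randint(1, num_classes, y.shape, device=y.device)
        labels = (y + offset) % num_classes
    return torch.cat([x, F.one_hot(labels, num_classes).float()], dim=-1)

def train_layer(layer, x, edge_index, y, num_classes, epochs=1000, lr=1e-3):
    opt = torch.optim.Adam(layer.parameters(), lr=lr, weight_decay=5e-4)
    for _ in range(epochs):
        opt.zero_grad()
        g_pos = goodness(layer(make_inputs(x, y, num_classes, True), edge_index).relu())
        g_neg = goodness(layer(make_inputs(x, y, num_classes, False), edge_index).relu())
        # Push positive goodness above THETA and negative goodness below it
        loss = (F.softplus(THETA - g_pos) + F.softplus(g_neg - THETA)).mean()
        loss.backward()  # gradients never cross layer boundaries
        opt.step()
    # Normalize and detach so the next layer trains on fixed representations
    with torch.no_grad():
        h = layer(make_inputs(x, y, num_classes, True), edge_index).relu()
    return F.normalize(h, dim=-1)

# Usage, layer by layer (labels are re-appended at every layer):
# h1 = train_layer(SAGEConv(num_features + num_classes, 128), x, edge_index, y, num_classes)
# h2 = train_layer(SAGEConv(128 + num_classes, 128), h1, edge_index, y, num_classes)
```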
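
The 64%-16%-20% random splits can be approximated with standard PyTorch Geometric transforms, as sketched below; the `Planetoid` loader for PUBMED and the transform settings are assumptions, and the authors' own split code may differ.

```python
from torch_geometric.datasets import Planetoid
from torch_geometric.transforms import RandomNodeSplit, RandomLinkSplit

dataset = Planetoid(root="data", name="PubMed")
data = dataset[0]

# Node classification: 64% train / 16% validation / 20% test masks
data_nodes = RandomNodeSplit(split="train_rest", num_val=0.16, num_test=0.20)(data)

# Link prediction: random edge split into train/val/test positive edge sets
train_data, val_data, test_data = RandomLinkSplit(
    num_val=0.16, num_test=0.20, is_undirected=True)(data)
```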
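
A quick sanity check against the reported environment (Python 3.8, PyTorch v1.13.1, PyTorch Geometric v2.2.0) could look like this; exact version pinning beyond what the paper states is an assumption.

```python
import sys
import torch
import torch_geometric

print(sys.version)                  # expect 3.8.x
print(torch.__version__)            # expect 1.13.1
print(torch_geometric.__version__)  # expect 2.2.0
```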
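
The reported hyperparameters map onto a conventional training loop as sketched below. For concreteness this trains a two-layer GCN with backpropagation (the forward-learning variants share the same hyperparameters), reusing `data_nodes` and `dataset` from the split sketch above; the `GCN` and `run` helpers are illustrative, not the released code.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    def __init__(self, in_dim, num_classes, hidden=128):  # hidden size 128, as reported
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, num_classes)

    def forward(self, x, edge_index):
        return self.conv2(self.conv1(x, edge_index).relu(), edge_index)

def run(data, num_classes, max_epochs=1000, patience=100):
    model = GCN(data.num_features, num_classes)
    opt = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=0.0005)
    best_val, wait = 0.0, 0
    for epoch in range(max_epochs):
        model.train()
        opt.zero_grad()
        out = model(data.x, data.edge_index)
        F.cross_entropy(out[data.train_mask], data.y[data.train_mask]).backward()
        opt.step()
        model.eval()
        with torch.no_grad():
            pred = model(data.x, data.edge_index).argmax(dim=-1)
            val_acc = (pred[data.val_mask] == data.y[data.val_mask]).float().mean().item()
        if val_acc > best_val:
            best_val, wait = val_acc, 0  # new best; reset the patience counter
        else:
            wait += 1
            if wait >= patience:  # validation-based early stopping (patience 100)
                break
    return model

# model = run(data_nodes, dataset.num_classes)
```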