Implicit Graph Neural Networks: A Monotone Operator Viewpoint
Authors: Justin Baker, Qingsong Wang, Cory D Hauck, Bao Wang
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We verify the computational efficiency and accuracy of the new models over existing IGNNs on various graph learning tasks at both graph and node levels. (Abstract) In this section, we compare the performance of MIGNN-Mon and MIGNN-NK with IGNN and several other popular GNNs on various graph classification tasks at both node and graph levels. (Section 5) |
| Researcher Affiliation | Collaboration | 1 Department of Mathematics and Scientific Computing and Imaging Institute, University of Utah. 2 Oak Ridge National Laboratory. |
| Pseudocode | Yes | Appendix F.1. Pseudocode for MIGNN with operator splitting schemes (Heading for Algorithm 1 and Algorithm 2). Appendix F.3. Anderson acceleration (Heading for Algorithm 5, 6, 7, 8). |
| Open Source Code | Yes | Code is available at https://github.com/Utah-Math-Data-Science/MIGNN (Abstract) |
| Open Datasets | Yes | Amazon co-purchasing dataset, which contains 334863 nodes, 925872 edges, and the diameter of the graph is 44 [63]; we provide details of the Amazon co-purchasing dataset in Appendix I. (Section 5.2) Cora, Citeseer, and Pubmed; each dataset's statistics of nodes/edges/average shortest path between nodes are 2485/5069/5.27, 2120/3679/9.31, and 19717/44324/6.34, respectively. (Section 5.3) MUTAG, PTC, COX2, PROTEINS, and NCI1 [62], and some details of these datasets are provided in Appendix I. (Section 5.4) |
| Dataset Splits | Yes | The data is partitioned into training, validation, and test sets of 5%, 10%, and 85%, respectively. (Section 5.1) We use the ten fixed data splits from Pei et al. [45], and use 10-fold cross-validation to evaluate the model performance. (Appendix J) A hedged sketch of the 5%/10%/85% node split is given after the table. |
| Hardware Specification | No | No specific hardware (e.g., GPU model, CPU type, memory details) used for running the experiments is mentioned. |
| Software Dependencies | No | The training procedure uses the Adam optimizer to minimize the BCEWithLogitsLoss provided by the PyTorch library. (Appendix J) The hyperparameters were selected using the Bayesian search feature of Weights & Biases [11]... (Appendix J) No specific version numbers for PyTorch or other critical libraries are provided. A minimal training-loop sketch is given after the table. |
| Experiment Setup | Yes | The training procedure details and hyperparameters used in each task are provided in Appendix J. The hyperparameters were selected using the Bayesian search feature of Weights & Biases [11] over a limited range of inputs. The hyperparameters considered are detailed in Table 5. (Section 5 and Appendix J) Table 5 lists hyperparameter options including learning rate, weight decay, dropout, hidden features, lambda max, alpha, and fp tol. An illustrative sweep-configuration sketch is given after the table. |
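
The 5%/10%/85% train/validation/test partition described in Section 5.1 can be illustrated with a short sketch. This is a minimal, hypothetical example of building node masks; the function name and the random-split strategy are assumptions for illustration, not taken from the released MIGNN code.

```python
# Hypothetical sketch of a 5%/10%/85% train/val/test node split (Section 5.1).
import torch

def split_nodes(num_nodes, train_frac=0.05, val_frac=0.10, seed=0):
    """Randomly assign nodes to boolean train/val/test masks."""
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=g)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    train_mask = torch.zeros(num_nodes, dtype=torch.bool)
    val_mask = torch.zeros(num_nodes, dtype=torch.bool)
    test_mask = torch.zeros(num_nodes, dtype=torch.bool)
    train_mask[perm[:n_train]] = True
    val_mask[perm[n_train:n_train + n_val]] = True
    test_mask[perm[n_train + n_val:]] = True   # remaining 85% of nodes
    return train_mask, val_mask, test_mask
```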
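
The Appendix J description of the training procedure (the Adam optimizer minimizing BCEWithLogitsLoss in PyTorch) corresponds to a standard setup like the sketch below. The model, data, and hyperparameter values are placeholders, not the authors' configuration.

```python
# Minimal training-loop sketch: Adam minimizing BCEWithLogitsLoss (Appendix J).
import torch

model = torch.nn.Linear(16, 1)                   # stand-in for an MIGNN model
features = torch.randn(100, 16)                  # stand-in node features
labels = torch.randint(0, 2, (100, 1)).float()   # stand-in binary node labels
train_mask = torch.rand(100) < 0.05              # 5% of nodes used for training

optimizer = torch.optim.Adam(model.parameters(), lr=1e-2, weight_decay=5e-4)
criterion = torch.nn.BCEWithLogitsLoss()

for epoch in range(200):
    optimizer.zero_grad()
    logits = model(features)
    loss = criterion(logits[train_mask], labels[train_mask])
    loss.backward()
    optimizer.step()
```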
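
The hyperparameter selection via the Bayesian search feature of Weights & Biases can be expressed as a sweep configuration over the quantities named in Table 5 (learning rate, weight decay, dropout, hidden features, lambda max, alpha, fp tol). The search ranges below are illustrative assumptions; the paper's actual Table 5 values are not reproduced here.

```python
# Illustrative Weights & Biases Bayesian-search sweep; ranges are assumed.
import wandb

sweep_config = {
    "method": "bayes",
    "metric": {"name": "val_accuracy", "goal": "maximize"},
    "parameters": {
        "learning_rate":   {"min": 1e-4, "max": 1e-1},
        "weight_decay":    {"min": 0.0, "max": 1e-3},
        "dropout":         {"values": [0.0, 0.3, 0.5]},
        "hidden_features": {"values": [16, 32, 64]},
        "lambda_max":      {"min": 0.5, "max": 1.0},
        "alpha":           {"min": 0.1, "max": 1.0},
        "fp_tol":          {"values": [1e-3, 1e-5, 1e-7]},
    },
}

sweep_id = wandb.sweep(sweep_config, project="mignn-reproduction")
# wandb.agent(sweep_id, function=train)  # `train` would run one configuration
```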