Node Embedding from Neural Hamiltonian Orbits in Graph Neural Networks
Authors: Qiyu Kang, Kai Zhao, Yang Song, Sijie Wang, Wee Peng Tay
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments demonstrate that our approach adapts better to different types of graph datasets than popular state-of-the-art graph node embedding GNNs. |
| Researcher Affiliation | Collaboration | 1School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 2C3 AI, Singapore. |
| Pseudocode | Yes | Algorithm 1: Graph Node Embedding with HamGNN |
| Open Source Code | Yes | The code is available at https://github.com/zknus/Hamiltonian-GNN. |
| Open Datasets | Yes | We select datasets with various geometries, including the three citation networks: Cora (McCallum et al., 2004), Citeseer (Sen et al., 2008), Pubmed (Namata et al., 2012); and two low-hyperbolicity datasets (Chami et al., 2019): Disease and Airport (cf. Table 5). In addition: In this section, to underscore our model's capacity for handling large graph datasets, we conduct a series of experiments on the Ogbn datasets obtained from https://ogb.stanford.edu/docs/nodeprop/, in compliance with the experimental setup detailed in (Hu et al., 2021). |
| Dataset Splits | Yes | We use a 60%, 20%, 20% random split for training, validation, and test sets on this new dataset. |
| Hardware Specification | No | The paper discusses computational cost and time in Table 11, but does not provide specific hardware details such as CPU/GPU models, memory, or cluster specifications used for the experiments. |
| Software Dependencies | No | We employ the ODE solver (Chen, 2018) in the implementation of HamGNN. For computation efficiency and performance effectiveness, the fixed-step explicit Euler solver (Chen et al., 2018a) is used in HamGNN. |
| Experiment Setup | Yes | We use the ADAM optimizer (Kingma & Ba, 2014) with weight decay 0.001. We set the learning rate to 0.01 for the citation networks and 0.001 for the Disease and Airport datasets. The results presented in Table 1 are under the 3-layer HamGNN setting. HamGNN first compresses the dimension of the input features to a fixed hidden dimension (e.g. 64) through a fully connected (FC) layer. |
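The fixed-step explicit Euler solver mentioned under Software Dependencies can be sketched in a few lines. The following NumPy snippet is a minimal illustration of Euler-integrating Hamiltonian dynamics (dq/dt = ∂H/∂p, dp/dt = −∂H/∂q) on a toy harmonic oscillator; it is not the authors' released implementation, which uses the ODE-solver library of Chen et al. (2018).

```python
import numpy as np

def euler_hamiltonian(grad_q, grad_p, q0, p0, dt=0.01, steps=100):
    """Fixed-step explicit Euler integration of Hamiltonian dynamics:
    dq/dt = dH/dp, dp/dt = -dH/dq."""
    q = np.asarray(q0, dtype=float)
    p = np.asarray(p0, dtype=float)
    traj = [(q.copy(), p.copy())]
    for _ in range(steps):
        dq = grad_p(q, p)        # dH/dp evaluated at (q, p)
        dp = -grad_q(q, p)       # -dH/dq evaluated at (q, p)
        q = q + dt * dq          # explicit Euler update, fixed step size
        p = p + dt * dp
        traj.append((q.copy(), p.copy()))
    return traj

# Toy Hamiltonian H(q, p) = (|q|^2 + |p|^2) / 2 (harmonic oscillator);
# the exact solution with q(0) = 1, p(0) = 0 is q(t) = cos(t), p(t) = -sin(t).
traj = euler_hamiltonian(grad_q=lambda q, p: q,
                         grad_p=lambda q, p: p,
                         q0=[1.0], p0=[0.0], dt=0.01, steps=100)
q_final, p_final = traj[-1]
```

With dt = 0.01 the numerical trajectory tracks the exact solution closely over the integration horizon, though the explicit Euler scheme does not exactly conserve the Hamiltonian, so energy drifts slowly over long rollouts.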
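The 60%/20%/20% random split described under Dataset Splits amounts to a shuffled partition of node indices. A minimal sketch follows; the function name and seed are illustrative and not taken from the released code.

```python
import numpy as np

def random_split(num_nodes, train_frac=0.6, val_frac=0.2, seed=0):
    """Randomly partition node indices into train/val/test sets
    (60% / 20% / 20% by default, with the remainder going to test)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)          # shuffled node indices
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    train_idx = perm[:n_train]
    val_idx = perm[n_train:n_train + n_val]
    test_idx = perm[n_train + n_val:]
    return train_idx, val_idx, test_idx

train_idx, val_idx, test_idx = random_split(1000)
```

The three index arrays are disjoint and together cover every node, so each node appears in exactly one of the train, validation, or test sets.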