Alternately Optimized Graph Neural Networks

Authors: Haoyu Han, Xiaorui Liu, Haitao Mao, Mohamadali Torkamani, Feng Shi, Victor Lee, Jiliang Tang

ICML 2023

Reproducibility
Research Type: Experimental
LLM Response: "Extensive experiments demonstrate that the proposed method can achieve comparable or better performance with state-of-the-art baselines while it has significantly better computation and memory efficiency. In this section, we verify the effectiveness of the proposed ALT-OPT by comprehensive experiments."
Researcher Affiliation: Collaboration
LLM Response: "1 Department of Computer Science and Engineering, Michigan State University, East Lansing, US; 2 Department of Computer Science, North Carolina State University, Raleigh, US; 3 Amazon, US (this work does not relate to the author's position at Amazon); 4 Tiger Graph, US."
Pseudocode: Yes
LLM Response: "Algorithm 1: Algorithm of ALT-OPT"
Open Source Code: Yes
LLM Response: "implementation details: https://github.com/haoyuhan1/ALT-OPT/"
Open Datasets: Yes
LLM Response: "For the transductive semi-supervised node classification task, we choose nine commonly used datasets including three citation datasets, i.e., Cora, Citeseer and Pubmed (Sen et al., 2008), two coauthor datasets, i.e., CS and Physics, two Amazon datasets, i.e., Computers and Photo (Shchur et al., 2018), and two OGB datasets, i.e., ogbn-arxiv and ogbn-products (Hu et al., 2020)."
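All nine datasets are available through standard loaders. Below is a minimal sketch of fetching them with PyTorch Geometric and the OGB package; the root paths are illustrative assumptions, not taken from the paper or its repository.

from torch_geometric.datasets import Planetoid, Coauthor, Amazon
from ogb.nodeproppred import PygNodePropPredDataset

# Citation datasets (Sen et al., 2008)
cora     = Planetoid(root="data/Planetoid", name="Cora")
citeseer = Planetoid(root="data/Planetoid", name="Citeseer")
pubmed   = Planetoid(root="data/Planetoid", name="Pubmed")

# Coauthor datasets (Shchur et al., 2018)
cs      = Coauthor(root="data/Coauthor", name="CS")
physics = Coauthor(root="data/Coauthor", name="Physics")

# Amazon datasets (Shchur et al., 2018)
computers = Amazon(root="data/Amazon", name="Computers")
photo     = Amazon(root="data/Amazon", name="Photo")

# OGB datasets (Hu et al., 2020)
arxiv    = PygNodePropPredDataset(name="ogbn-arxiv", root="data/OGB")
products = PygNodePropPredDataset(name="ogbn-products", root="data/OGB")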
Dataset Splits: Yes
LLM Response: "For label rates 5, 10, 20, and 60, we use 500 nodes for validation and 1000 nodes for test. For label rates 30% and 60%, we use half of the rest nodes for validation and the remaining half for test."
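The split description maps naturally onto a per-class sampling routine. The helper below is a hedged sketch for the low-label-rate splits (500 validation / 1000 test nodes); the function name and the seeding scheme are assumptions, since the paper does not publish its sampling code.

import torch

def make_low_label_split(y, labels_per_class, num_val=500, num_test=1000, seed=0):
    """Sample `labels_per_class` training nodes per class, then take 500
    validation and 1000 test nodes from the remainder (assumed scheme)."""
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(y.size(0), generator=g)
    # Per-class training nodes, drawn from a single random permutation.
    train_idx = torch.cat([perm[y[perm] == c][:labels_per_class] for c in y.unique()])
    rest = perm[~torch.isin(perm, train_idx)]
    return train_idx, rest[:num_val], rest[num_val:num_val + num_test]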
Hardware Specification: Yes
LLM Response: "All the experiments are conducted on the same machine with an NVIDIA RTX A6000 GPU (48 GB memory)."
Software Dependencies: No
LLM Response: The paper mentions 'PyTorch-Geometric' and 'The Adam optimizer' but does not specify their version numbers or other software dependencies with versions.
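Because no versions are pinned, a faithful reproduction should record the environment it actually runs under; a minimal sketch:

import torch
import torch_geometric

# Log the versions in use, since the paper does not pin any.
print("torch:", torch.__version__)
print("torch_geometric:", torch_geometric.__version__)
print("CUDA available:", torch.cuda.is_available())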
Experiment Setup: Yes
LLM Response: "For all methods, the following hyperparameters are tuned based on the loss and validation accuracy from the following search space: Learning Rate: {0.01, 0.05}; Dropout Rate: {0, 0.5, 0.8}; Weight Decay: {5e-4, 5e-5, 0}; Hyperparameters between 0 and 1: step size 0.1. For ALT-OPT, λ1 and λ2 are tuned from {0.1, 0.3, 0.5, 0.7, 1} and {1, 3, 5, 7, 10}, respectively; 10 propagation layers; pretraining steps s = 100; τ = 0.1; the number of pseudo labels per class m is chosen from {100, 200, 500, 5000} based on the size of the graph. The number of training epochs e is set to 1,000 for the ogbn-products dataset and 500 for all other datasets, the same as for the other models."
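The reported search space translates directly into a grid. The sketch below enumerates it with itertools.product; the dictionary keys and the loop body are illustrative assumptions, while the candidate values are the ones reported in the paper.

from itertools import product

# Search space as reported; key names are illustrative, not from the paper.
search_space = {
    "lr": [0.01, 0.05],                  # Learning Rate
    "dropout": [0, 0.5, 0.8],            # Dropout Rate
    "weight_decay": [5e-4, 5e-5, 0],     # Weight Decay
    "lambda1": [0.1, 0.3, 0.5, 0.7, 1],  # ALT-OPT lambda_1
    "lambda2": [1, 3, 5, 7, 10],         # ALT-OPT lambda_2
}

for values in product(*search_space.values()):
    config = dict(zip(search_space, values))
    # Train ALT-OPT with `config` (10 propagation layers, s = 100
    # pretraining steps, tau = 0.1) and keep the configuration with the
    # best validation accuracy.
    ...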