Faster Local Solvers for Graph Diffusion Equations

Authors: Jiahe Bai, Baojian Zhou, Deqing Yang, Yanghua Xiao

NeurIPS 2024

Reproducibility Variable Result LLM Response
Research Type Experimental We conduct experiments on 18 graphs ranging from small-scale (cora) to large-scale (papers100M), mainly collected from Stanford SNAP [45] and OGB [42] (see details in Table 3). We focus on the following tasks: 1) approximating diffusion vectors f with fixed stop conditions using both CPU and GPU implementations; 2) approximating dynamic PPR using local methods and training dynamic GNN models based on InstantGNN models [75].
Researcher Affiliation Academia Jiahe Bai 1, Baojian Zhou 1,2, Deqing Yang 1,2, Yanghua Xiao 2. 1 School of Data Science, Fudan University; 2 Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University. {jhbai20,bjzhou,yangdeqing,shawyh}@fudan.edu.cn
Pseudocode Yes Algo 1 APPR(G, ϵ, α, s) [2], adapted from [8]. 1: x ← 0, r ← α·e_s; 2: while ∃u : r_u ≥ ϵ·α·d_u do 3: (r, x) ← PUSH(u, r, x); 4: return x. PUSH(u, r, x): 1: r̂_u ← r_u; 2: x_u ← x_u + r̂_u; 3: r_u ← 0; 4: for v ∈ N(u) do 5: r_v ← r_v + (1 − α)·r̂_u / d_u; 6: return (r, x)
Open Source Code Yes Our code is publicly available at https://github.com/JiaheBai/Faster-Local-Solver-for-GDEs.
Open Datasets Yes We conduct experiments on 18 graphs ranging from small-scale (cora) to large-scale (papers100M), mainly collected from Stanford SNAP [45] and OGB [42] (see details in Table 3).
Dataset Splits No We randomly sample 50 nodes uniformly from lower to higher-degree nodes for all experiments. All results are averaged over these 50 nodes.
Hardware Specification Yes We conduct experiments using Python 3.10 with CuPy and Numba on a server with 80 cores, 256GB of memory, and two 28GB NVIDIA-4090 GPUs.
Software Dependencies Yes We conduct experiments using Python 3.10 with CuPy and Numba on a server with 80 cores, 256GB of memory, and two 28GB NVIDIA-4090 GPUs.
Experiment Setup Yes We set (α_PPR = 0.1, ϵ = 1/n) for PPR and (α_Katz = 1/(‖A‖₂ + 1), ϵ = 1/m) for Katz. We use a temperature of τ = 10 and ϵ = 1/√n for HK. All algorithms for HK estimate the number of iterations by considering the truncated error of Taylor approximation. Table 2 presents the speedup ratio of four local methods over their standard counterparts.
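The Taylor-truncation rule for HK mentioned above can be illustrated with a short sketch. The heat-kernel weights e^{−τ}·τ^k/k! sum to 1, so one keeps adding terms until the remaining tail drops below ϵ. The function name and loop structure here are illustrative assumptions, not the paper's code:

```python
import math


def hk_taylor_terms(tau, eps):
    """Smallest N such that the truncated Taylor tail of the
    heat kernel, e^{-tau} * sum_{k > N} tau^k / k!, is <= eps."""
    tail = 1.0                # e^{-tau} * sum_{k >= 0} tau^k / k! = 1
    term = math.exp(-tau)     # k = 0 term
    n = 0
    while True:
        tail -= term          # tail now excludes terms 0..n
        if tail <= eps:
            return n
        n += 1
        term *= tau / n       # next Poisson-style weight
```

With τ = 10 this amounts to a Poisson(10) tail bound, so the number of iterations grows only mildly as ϵ shrinks.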