Time-Aware Random Walk Diffusion to Improve Dynamic Graph Learning

Authors: Jong-whi Lee, Jinhong Jung

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Throughout extensive experiments, we demonstrate that TiaRa effectively augments a given dynamic graph and leads to significant improvements in dynamic GNN models for various graph datasets and tasks.
Researcher Affiliation | Academia | Jong-whi Lee and Jinhong Jung*, Department of Computer Science and Artificial Intelligence, Jeonbuk National University, South Korea
Pseudocode | Yes | Algorithm 1: TiaRa at time t. (A hedged sketch of one diffusion step appears after the table.)
Open Source Code | Yes | The code of TiaRa and the datasets are publicly available at https://github.com/dev-jwel/TiaRa.
Open Datasets | Yes | Table 1 summarizes the 7 public datasets used in this work. Bitcoin Alpha is a social network between Bitcoin users (Kumar et al. 2016, 2018b). Wiki Elec is a voting network for Wikipedia adminship elections (Leskovec, Huttenlocher, and Kleinberg 2010). Reddit Body is a hyperlink network of connections between two subreddits (Kumar et al. 2018a). For node classification, we use the following datasets evaluated in (Xu et al. 2019): Brain is a network of brain tissues where edges indicate their connectivity; DBLP-3 and DBLP-5 are co-authorship networks extracted from DBLP; Reddit is a post network where two posts are connected if they contain similar keywords. The code of TiaRa and the datasets are publicly available at https://github.com/dev-jwel/TiaRa.
Dataset Splits | Yes | For each dataset, we tune the hyperparameters of all models on the original graph (marked as NONE) and augmented graphs separately through a combination of grid and random search on a validation set, and report test accuracy at the best validation epoch. ... As a standard setting (Pareja et al. 2020), we follow a chronological split with ratios of training (70%), validation (10%), and test (20%) sets. (A chronological-split snippet appears after the table.)
Hardware Specification | Yes | All experiments were done on workstations with an Intel Xeon 4215R CPU and an RTX 3090 GPU.
Software Dependencies | No | We use PyTorch and DGL (Wang et al. 2019) to implement all methods. Specific version numbers for PyTorch or DGL are not provided.
Experiment Setup | Yes | For TiaRa, we fix K to 100, search for ϵ in [0.0001, 0.01], and tune α and β in (0, 1) such that 0 < α + β < 1. We use the Adam optimizer with weight decay 10⁻⁴, and the learning rate is tuned in [0.01, 0.05] with a decay factor of 0.999. The dropout ratio is searched in [0, 0.5]. (An optimizer-setup snippet appears after the table.)
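
To make the Pseudocode row concrete, below is a minimal PyTorch sketch of one time-aware random-walk diffusion step at snapshot t. The restart probability α, time-travel probability β, iteration cap K, and sparsification threshold ϵ mirror the hyperparameters quoted in the Experiment Setup row, but the update rule, function name, and convergence test are our assumptions; this is not the authors' Algorithm 1, only an illustration of the general technique of mixing restart (α), a jump back to the previous snapshot's diffusion (β), and a walk on the current snapshot (1 − α − β), consistent with the quoted constraint 0 < α + β < 1.

```python
import torch

def tiara_diffusion_sketch(A_t, X_prev, alpha=0.2, beta=0.3, K=100, eps=1e-3):
    """Hypothetical sketch of one time-aware diffusion step at snapshot t.

    A_t:    (n, n) row-normalized adjacency matrix of the current snapshot.
    X_prev: (n, n) diffusion matrix of the previous snapshot (identity at
            t = 0), serving as the target of the "time travel" jump.
    Returns a sparsified diffusion matrix usable as an augmented graph.
    Assumes 0 < alpha + beta < 1, matching the paper's constraint.
    """
    n = A_t.shape[0]
    I = torch.eye(n)
    X = I.clone()
    walk = 1.0 - alpha - beta  # mass left for walking on the current snapshot
    for _ in range(K):  # power iteration, capped at K steps (K = 100 quoted)
        X_new = alpha * I + beta * X_prev + walk * (X @ A_t)
        if torch.norm(X_new - X, p=1) < eps:  # early stop once converged
            X = X_new
            break
        X = X_new
    X[X < eps] = 0.0  # threshold small entries to keep the augmented graph sparse
    return X

# Example: a tiny two-node snapshot, starting from the identity at t = 0.
A1 = torch.tensor([[0.0, 1.0], [1.0, 0.0]])
X1 = tiara_diffusion_sketch(A1, torch.eye(2))
```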
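The chronological split quoted in the Dataset Splits row is straightforward to reproduce. The sketch below assumes the snapshots arrive as a time-ordered Python list; the function name and ratios-as-arguments interface are ours, while the 70%/10%/20% protocol is the quoted setting.

```python
def chronological_split(snapshots, train_ratio=0.7, val_ratio=0.1):
    """Split a time-ordered list of graph snapshots into train/val/test.

    Follows the quoted chronological protocol: the earliest 70% of
    snapshots train the model, the next 10% validate, the last 20% test.
    """
    n = len(snapshots)
    train_end = int(n * train_ratio)
    val_end = int(n * (train_ratio + val_ratio))
    return snapshots[:train_end], snapshots[train_end:val_end], snapshots[val_end:]

snapshots = list(range(10))  # stand-in for 10 graph snapshots
train, val, test = chronological_split(snapshots)
# train -> [0..6] (70%), val -> [7] (10%), test -> [8, 9] (20%)
```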
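The optimizer settings in the Experiment Setup row map directly onto standard PyTorch calls. In the sketch below, the Linear model is a stand-in for a dynamic GNN, and lr=0.01 is one point from the quoted search range [0.01, 0.05]; everything else follows the quoted values.

```python
import torch

# Stand-in model; any dynamic GNN would take its place (assumption).
model = torch.nn.Linear(16, 2)

# Adam with weight decay 1e-4, as quoted; lr = 0.01 is one point
# from the reported search range [0.01, 0.05].
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=1e-4)

# Multiplicative learning-rate decay with the quoted factor 0.999.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.999)

for epoch in range(5):  # real training loop elided; only the schedule is shown
    loss = model(torch.randn(8, 16)).sum()  # dummy forward pass
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()  # lr <- lr * 0.999 after each epoch
```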