Continuous-Time Graph Representation with Sequential Survival Process

Authors: Abdulkadir Çelikkanat, Nikolaos Nakis, Morten Mørup

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct extensive experiments on diverse real-world datasets to evaluate GraSSP. The results showcase its effectiveness in capturing intricate characteristics of networks by explicitly accounting for intermittent edge persistence, and outperforming baseline methods in downstream tasks." and "In this section, we will examine the performance of the proposed model over a diverse range of networks varying in size and characteristics. But before delving into the experimental evaluations, we will first present details regarding the experimental setup, considered datasets, and baseline approaches."
Researcher Affiliation | Academia | Abdulkadir Çelikkanat, Nikolaos Nakis, Morten Mørup; Department of Applied Mathematics and Computer Science, Technical University of Denmark; abce@dtu.dk, nnak@dtu.dk, mmor@dtu.dk
Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper.
Open Source Code | Yes | "Implementation. The source code and other details can be found at https://abdcelikkanat.github.io/projects/grassp"
Open Datasets | Yes | "In the experiments, we have used several real datasets of diverse types, including a social network (Facebook) (Viswanath et al. 2009), a collaboration graph (NeurIPS), and three contact networks (Génois et al. 2015; Isella et al. 2011)."
Dataset Splits | Yes | "Furthermore, we randomly choose 20% of node pairs among all possible dyads in the initial part, and they are divided into two equal-sized groups to design the validation and testing sets." A minimal split sketch follows this table.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running the experiments were found; the paper only specifies D = 2 for the latent space dimension.
Software Dependencies | No | No specific software versions (e.g., Python or library versions such as PyTorch 1.9) were found. The paper mentions the Adam optimizer (Kingma and Ba 2017) but without version details.
Experiment Setup | Yes | "The position vectors (x) are initialized uniformly at random within the [-1, 1] range. The bias terms (β) and velocities (v) are sampled from the standard normal distribution. The prior parameters (σB, σN) are initially set to 1/B and 1/N. We follow a sequential learning strategy for training the model, i.e., we optimize different sets of parameters in stages. First, we optimize the velocities (v) for 100 epochs. Then, we include the initial positions (x) in the optimization procedure, and we continue to train the model by optimizing these two parameters (x, v) together for another 100 epochs. Finally, we incorporate the bias and prior parameters and optimize all model hyper-parameters together. In total, we use 300 epochs for the whole learning procedure, and the Adam optimizer (Kingma and Ba 2017) is employed with an initial learning rate of 0.1. In the experiments, we set the number of bins (B) to 100 to ensure sufficient capacity for tracking nodes in the latent space (D = 2)." A staged-training sketch follows this table.
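
The Dataset Splits row describes sampling 20% of all possible dyads and halving them into validation and test sets. Below is a minimal sketch of that procedure, assuming an undirected graph; the helper name split_dyads and the fixed seed are our own illustration, not from the paper.

```python
import random

def split_dyads(num_nodes, ratio=0.20, seed=42):
    """Sample `ratio` of all possible dyads and split them evenly
    into validation and test sets (hypothetical helper)."""
    rng = random.Random(seed)
    # All unordered node pairs (i < j) of an undirected graph.
    dyads = [(i, j) for i in range(num_nodes) for j in range(i + 1, num_nodes)]
    sampled = rng.sample(dyads, int(ratio * len(dyads)))
    half = len(sampled) // 2
    return sampled[:half], sampled[half:]  # (validation, test)

val_pairs, test_pairs = split_dyads(num_nodes=100)
```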
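
The Experiment Setup row's staged optimization can be sketched as follows. This is a rough PyTorch illustration under stated assumptions: the parameter shapes and the loss_fn stand-in are ours, since the paper's survival-process likelihood is not reproduced here; only the initialization, stage order, epoch counts, and Adam learning rate follow the quoted setup.

```python
import torch

N, D, B = 100, 2, 100  # nodes, latent dimension (D = 2), number of bins (B = 100)

# Initialization per the quoted setup: positions uniform in [-1, 1],
# velocities and biases standard normal, prior scales 1/B and 1/N.
x = torch.empty(N, D).uniform_(-1.0, 1.0).requires_grad_()    # initial positions
v = torch.randn(B, N, D, requires_grad=True)                  # piecewise velocities
beta = torch.randn(N, requires_grad=True)                     # bias terms
sigma = torch.tensor([1.0 / B, 1.0 / N], requires_grad=True)  # prior scales

def loss_fn():
    # Hypothetical stand-in for the model's negative log-likelihood;
    # the actual survival-process objective is defined in the paper.
    return x.norm() + v.norm() + beta.norm() + sigma.norm()

def train(params, epochs, lr=0.1):
    # Adam with the quoted initial learning rate of 0.1.
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn().backward()
        opt.step()

train([v], 100)                  # stage 1: velocities only
train([x, v], 100)               # stage 2: add initial positions
train([x, v, beta, sigma], 100)  # stage 3: all parameters jointly (300 epochs total)
```

Each stage constructs a fresh optimizer over only the parameters being trained, so earlier-stage parameters stay frozen until they are explicitly included, matching the sequential learning strategy described in the quote.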