Continuous-time edge modelling using non-parametric point processes

Authors: Xuhui Fan, Bin Li, Feng Zhou, Scott A. Sisson

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable: Result (LLM Response)
Research Type: Experimental. The superior performance of the proposed CTEM models is demonstrated through extensive experimental evaluations on four real-world continuous-time edge datasets.
Researcher Affiliation: Academia. Xuhui Fan¹, Bin Li², Feng Zhou³, and Scott A. Sisson¹. ¹UNSW Data Science Hub, and School of Mathematics & Statistics, University of New South Wales; ²Shanghai Key Lab of IIP, School of Computer Science, Fudan University; ³Department of Computer Science & Technology, Tsinghua University.
Pseudocode: No. The paper describes its generative processes and algorithms in paragraph form and numbered lists, but does not include formal pseudocode blocks or figures explicitly labeled as 'Algorithm' or 'Pseudocode'.
Open Source Code: Yes. "We provide the code and data in the Supplementary Material."
Open Datasets: Yes. "We perform experiments on four continuous-time edge datasets (Table 1) collected from the Stanford Large Social Network Dataset [16]." [16] Jure Leskovec and Andrej Krevl. SNAP Datasets: Stanford Large Network Dataset Collection. http://snap.stanford.edu/data, June 2014.
Dataset Splits: No. The edges of each dataset are sorted by their occurrence times, and the dataset is split into a training set (the first 70% of edges) and a testing set (the remaining 30%). No explicit validation split is mentioned.
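The temporal split described in this row can be sketched as follows. This is a minimal illustration, not the paper's code; the edge tuple layout and function name are assumptions.

```python
# Sketch of the paper's temporal train/test split: sort edges by occurrence
# time, then take the first 70% as training and the remaining 30% as testing.
# The (sender, receiver, time) tuple layout is an assumed representation.
def temporal_split(edges, train_frac=0.7):
    """Split a list of (sender, receiver, time) edges chronologically."""
    edges_sorted = sorted(edges, key=lambda e: e[2])  # sort by occurrence time
    cut = int(len(edges_sorted) * train_frac)         # index of the 70% boundary
    return edges_sorted[:cut], edges_sorted[cut:]

# Toy example: with 4 edges, int(4 * 0.7) = 2 edges go to training.
edges = [(0, 1, 3.2), (1, 2, 0.5), (0, 2, 1.7), (2, 1, 4.1)]
train, test = temporal_split(edges)
```

Note that all test-set edges occur strictly after the training edges, which matches the forecasting-style evaluation implied by a chronological split.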
Hardware Specification: No. The paper mentions running time 'on a laptop' but does not specify any particular hardware components, such as CPU or GPU models or memory capacity.
Software Dependencies: No. The paper mentions using the ADAM algorithm for optimization but does not specify version numbers for any software, libraries, or frameworks used (e.g., Python, PyTorch, TensorFlow).
Experiment Setup: Yes. "We use the Gaussian kernel functions κ_k^{(f)}(t_i, t_j) = θ_k^{(f)} e^{−0.5(t_i − t_j)²/(δ_k^{(f)})²} and κ_k^{(g)}(t_i, t_j) = θ_k^{(g)} e^{−0.5(t_i − t_j)²/(δ_k^{(g)})²} in the Gaussian processes for generating the random functions {f_k(t)}_{k=1}^K and {g_k(t)}_{k=1}^K respectively, where θ_k^{(f)}, θ_k^{(g)} are scaling parameters and δ_k^{(f)}, δ_k^{(g)} are bandwidth parameters. These parameters are optimized by applying the ADAM algorithm [15] to the Evidence Lower Bound of the variational distributions. The parameters a_v, b_v of the endogenous function scaling values v_k and the parameters a_π, b_π of the latent features π_ik are given a Gam(0.1, 0.1) prior and optimized through variational inference. We usually run the MF-VI algorithm for 400 iterations. Unless otherwise specified, we set the number of latent features to K = 5."
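The Gaussian (squared-exponential) kernel quoted above can be sketched as follows. This is an illustrative implementation only: the parameter values are placeholders, not the optimized θ_k, δ_k from the paper.

```python
import numpy as np

# Sketch of the paper's Gaussian kernel:
#   kappa_k(t_i, t_j) = theta_k * exp(-0.5 * (t_i - t_j)^2 / delta_k^2)
# theta is the scaling parameter, delta the bandwidth parameter.
# The values used below are illustrative, not taken from the paper.
def gaussian_kernel(ti, tj, theta=1.0, delta=1.0):
    return theta * np.exp(-0.5 * (ti - tj) ** 2 / delta ** 2)

# Gram matrix over a grid of event times, as would serve as the GP
# covariance when generating the random functions f_k(t) or g_k(t).
t = np.linspace(0.0, 1.0, 5)
K = gaussian_kernel(t[:, None], t[None, :], theta=2.0, delta=0.3)
# K is symmetric, and its diagonal equals theta since kappa(t, t) = theta.
```

In the paper these kernel parameters are not fixed as above but are optimized, per the excerpt, by running ADAM on the ELBO of the variational distributions.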