Learning Latent Process from High-Dimensional Event Sequences via Efficient Sampling

Authors: Qitian Wu, Zixuan Zhang, Xiaofeng Gao, Junchi Yan, Guihai Chen

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on both synthetic and real-world datasets demonstrate that the proposed method can effectively detect the hidden network among markers and make decent predictions for future marked events, even when the number of markers scales to the million level.
Researcher Affiliation | Academia | 1 Shanghai Key Laboratory of Scalable Computing and Systems; 2 Department of Computer Science and Engineering, Shanghai Jiao Tong University; 3 MoE Key Lab of Artificial Intelligence, Shanghai Jiao Tong University; 4 State Key Laboratory of Novel Software Technology, Nanjing University. {echo740, zzx_gongshi117}@sjtu.edu.cn, gao-xf@cs.sjtu.edu.cn, yanjunchi@sjtu.edu.cn, gchen@nju.edu.cn
Pseudocode | Yes | Algorithm 1: Efficient Random Walk based Sampling for Generation of Next Event Marker (a hedged sketch of the sampling idea follows the table).
Open Source Code | Yes | The code is released at https://github.com/zhangzx-sjtu/LANTERN-NeurIPS-2019.
Open Datasets | Yes | Two real-world datasets are used. The MemeTracker dataset [11] contains hyperlinks between articles and records information flow from one site to another... In addition, the large-scale Weibo dataset [26] records the resharing of posts among 1,787,443 users with 413,503,687 following edges.
Dataset Splits | No | The paper describes generating synthetic datasets and using real-world datasets, but it does not specify explicit train/validation/test splits, percentages, or sample counts.
Hardware Specification | Yes | Experiments are deployed on Nvidia Tesla K80 GPUs with 12 GB of memory, and running time is recorded to discuss model scalability.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python version, or library versions such as TensorFlow or PyTorch).
Experiment Setup | No | The paper states that 'the implementation details for baselines and hyper-parameter settings are presented in Appendix C', but those specifics are not given in the main body of the analyzed text.
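
Since the paper's Algorithm 1 centers on random-walk-based sampling of the next event marker, a minimal sketch may help convey the core idea: sample by walking a sparse influence graph rather than normalizing a distribution over every marker. Everything below is a hypothetical illustration, not the authors' released implementation; the `influence_graph` structure, the `sample_next_marker` function, and the toy graph are invented for exposition.

```python
import numpy as np

# Hypothetical sketch: `influence_graph` maps each marker id to
# (neighbor_ids, transition_probs), i.e. one sparse row of a learned
# influence network among markers.

def sample_next_marker(influence_graph, current_marker, walk_length=3, rng=None):
    """Take a short random walk from the current marker; return the endpoint.

    Each step only touches the out-neighbors of the current node, so the
    cost is O(walk_length * avg_degree) rather than O(number of markers).
    """
    rng = rng or np.random.default_rng()
    node = current_marker
    for _ in range(walk_length):
        neighbors, probs = influence_graph[node]
        node = int(rng.choice(neighbors, p=probs))  # one weighted step
    return node

# Toy 4-marker graph: marker 0 tends to trigger 1, 1 tends to trigger 2, etc.
toy_graph = {
    0: ([1, 2], [0.8, 0.2]),
    1: ([2, 3], [0.7, 0.3]),
    2: ([3, 0], [0.9, 0.1]),
    3: ([0, 1], [0.5, 0.5]),
}

print(sample_next_marker(toy_graph, current_marker=0, rng=np.random.default_rng(0)))
```

The design point the sketch captures is cost: each draw visits only the neighbors along the walk, which is what lets a sampler of this style stay cheap even when the marker set grows to the million level claimed in the paper.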