Continuous-Time Graph Learning for Cascade Popularity Prediction

Authors: Xiaodong Lu, Shuo Ji, Le Yu, Leilei Sun, Bowen Du, Tongyu Zhu

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on real-world datasets demonstrate the superiority and rationality of our approach."
Researcher Affiliation | Academia | "Xiaodong Lu, Shuo Ji, Le Yu, Leilei Sun, Bowen Du, Tongyu Zhu. SKLSDE Lab, Beihang University, Beijing 100191, China. {xiaodonglu, jishuo, yule, leileisun, dubowen, zhutongyu}@buaa.edu.cn"
Pseudocode | No | The paper describes methods and processes but does not include explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | "Our code can be found at https://github.com/lxd99/CTCP."
Open Datasets | Yes | "APS [1] contains papers published on American Physical Society (APS) journals and their citation relationships before 2017." (Footnote 1: https://journals.aps.org/datasets)
Dataset Splits | Yes | "Following Xu et al. [2021], we randomly select 70%, 15% and 15% of the cascades for training, validating and testing."
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory specifications) used for running experiments.
Software Dependencies | No | The paper mentions software components like 'Adam optimizer', 'GRU', and 'LSTM' but does not specify their version numbers or the versions of any underlying libraries/frameworks used for implementation.
Experiment Setup | Yes | "We set the dimension of dynamic states of users and cascades, as well as the cascade embedding to 64. The dimension of position embedding is set to 16. The time slot number n_t is set to 20 and the fusion weight λ is 0.1. For training, we adopt the Adam optimizer and use the early stopping strategy with a patience of 15. The learning rate and batch size are set to 0.0001 and 50."
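The reported 70%/15%/15% random cascade split can be sketched as follows. This is a minimal illustration, not the authors' released code; the function name `split_cascades` and the fixed seed are assumptions for reproducibility of the example.

```python
import random

def split_cascades(cascade_ids, train_frac=0.7, val_frac=0.15, seed=0):
    """Randomly split cascade IDs into train/val/test sets (default 70/15/15).

    Note: illustrative helper, not taken from the paper's repository.
    """
    ids = list(cascade_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for the example
    n = len(ids)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]
```

With 100 cascades this yields splits of size 70, 15, and 15 that partition the original set.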
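The hyperparameters quoted above can be collected into a configuration sketch, together with the early-stopping rule (patience of 15). The dictionary keys and the `EarlyStopping` class are illustrative names, not identifiers from the paper or its repository.

```python
# Hyperparameters reported in the paper (key names are assumptions):
CONFIG = {
    "state_dim": 64,        # dynamic states of users/cascades and cascade embedding
    "position_dim": 16,     # position embedding dimension
    "num_time_slots": 20,   # n_t
    "fusion_weight": 0.1,   # lambda
    "learning_rate": 1e-4,  # Adam optimizer
    "batch_size": 50,
    "patience": 15,         # early stopping patience
}

class EarlyStopping:
    """Stop training after `patience` consecutive epochs without
    improvement in validation loss (illustrative implementation)."""

    def __init__(self, patience):
        self.patience = patience
        self.best = float("inf")
        self.counter = 0

    def step(self, val_loss):
        """Return True when training should stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.counter = 0
        else:
            self.counter += 1
        return self.counter >= self.patience
```

For example, with a patience of 2, two non-improving epochs in a row trigger the stop signal.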