Recurrent Temporal Revision Graph Networks

Authors: Yizhou Chen, Anxiang Zeng, Qingtao Yu, Kerui Zhang, Yuanpeng Cao, Kangle Wu, Guangda Huzhang, Han Yu, Zhiming Zhou

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the proposed RTRGN with the temporal link prediction task on 8 real-world datasets (7 public datasets as well as a private Ecommerce dataset that will be released along with this submission), and compare it against related baselines...
Researcher Affiliation | Collaboration | Yizhou Chen (Shopee Pte Ltd.), Anxiang Zeng (SCSE, Nanyang Technological University, Singapore), Qingtao Yu (Shopee Pte Ltd.), Kerui Zhang (Shopee Pte Ltd.), Yuanpeng Cao (Shopee Pte Ltd.), Kangle Wu (Shopee Pte Ltd.), Guangda Huzhang (Shopee Pte Ltd.), Han Yu (SCSE, Nanyang Technological University, Singapore), Zhiming Zhou (Shanghai University of Finance and Economics, China)
Pseudocode | Yes | We provide the pseudocode for RTRGN embedding calculation in Appendix A.
Open Source Code | No | The paper states: 'An Ecommerce dataset is provided along with this submission', but does not explicitly mention the provision of open-source code for the proposed methodology.
Open Datasets | Yes | We evaluate the proposed RTRGN with the temporal link prediction task on 8 real-world datasets (7 public datasets as well as a private Ecommerce dataset that will be released along with this submission)... The evaluated datasets are MovieLens, Wikipedia, Reddit, Social E.-1m, Social E., UCI-FSN, Ubuntu, and Ecommerce.
Dataset Splits | Yes | For datasets, following [28], we split the events chronologically into training (70%), validation (15%), and testing (15%) sets. The same applies to inductive settings, where for both the training and validation sets, 10% of nodes (excluding the target nodes) are randomly masked out. A minimal split sketch follows the table.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks with their respective versions) used for implementation or experimentation.
Experiment Setup | Yes | We used the Adam optimizer with a learning rate of 0.0001 and a batch size of 200 (unless specified differently by other baselines for their optimal performance). We train models for 50 epochs. A training-loop sketch with these settings follows the table.
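
The chronological 70/15/15 split and the inductive node masking described in the Dataset Splits row can be reproduced in a few lines. Below is a minimal sketch, assuming events are stored as (source, destination, timestamp) tuples; the function names and tuple layout are illustrative, not taken from the paper.

```python
import random
from typing import List, Tuple

Event = Tuple[int, int, float]  # (source node, destination node, timestamp)

def chronological_split(events: List[Event],
                        train_frac: float = 0.70,
                        val_frac: float = 0.15):
    """Split a temporal event stream by time into train/val/test sets.

    Sorting by timestamp first ensures no future interaction leaks into
    an earlier split (70% / 15% / 15%, as reported in the paper).
    """
    events = sorted(events, key=lambda e: e[2])
    n = len(events)
    train_end = int(n * train_frac)
    val_end = int(n * (train_frac + val_frac))
    return events[:train_end], events[train_end:val_end], events[val_end:]

def inductive_mask(events: List[Event],
                   mask_frac: float = 0.10,
                   exclude: frozenset = frozenset(),
                   seed: int = 0) -> List[Event]:
    """Drop all events touching a random 10% of nodes (inductive setting).

    Nodes in `exclude` (e.g. the target nodes) are never masked, mirroring
    the paper's "excluding the target nodes" caveat.
    """
    nodes = {u for u, _, _ in events} | {v for _, v, _ in events}
    candidates = sorted(nodes - exclude)
    rng = random.Random(seed)
    masked = set(rng.sample(candidates, int(len(candidates) * mask_frac)))
    return [e for e in events if e[0] not in masked and e[1] not in masked]
```

In the inductive protocol, `inductive_mask` would be applied to the training and validation portions only, so that the masked nodes are first seen at test time.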
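
The reported optimization settings (Adam, learning rate 0.0001, batch size 200, 50 epochs) map onto a standard PyTorch training loop. The sketch below uses those hyperparameters; the model, dataset, and loss are hypothetical stand-ins, since the paper's code is not released.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-ins: the actual RTRGN model and temporal datasets are
# not released, so a linear scorer on random features is used here.
model = torch.nn.Linear(128, 1)
dataset = TensorDataset(torch.randn(10_000, 128),
                        torch.randint(0, 2, (10_000, 1)).float())

# Hyperparameters as reported in the paper.
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
loader = DataLoader(dataset, batch_size=200, shuffle=True)
criterion = torch.nn.BCEWithLogitsLoss()  # a common choice for link prediction

for epoch in range(50):
    for features, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(features), labels)
        loss.backward()
        optimizer.step()
```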