Fully Exploiting Cascade Graphs for Real-time Forwarding Prediction
Authors: Xiangyun Tang, Dongliang Liao, Weijie Huang, Jin Xu, Liehuang Zhu, Meng Shen
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using two real-world datasets, we demonstrate the significant superiority of the proposed method compared with the state-of-the-art. Our experiments also reveal interesting implications hidden in the performance differences between cascade graph embedding and time-series modeling. |
| Researcher Affiliation | Collaboration | Xiangyun Tang1, Dongliang Liao2*, Weijie Huang2, Jin Xu2, Liehuang Zhu1, Meng Shen1,3 1School of Cyberspace Security, Beijing Institute of Technology, China 2Data Quality Team, WeChat, Tencent Inc., China 3Cyberspace Security Research Center, Peng Cheng Laboratory, China xiangyunt@bit.edu.cn, {brightliao, wainhuang, jinxxu}@tencent.com, {liehuangz, shenmeng}@bit.edu.cn |
| Pseudocode | Yes | Algorithm 1 Path Sampling Strategy (an illustrative stand-in sketch appears after this table) |
| Open Source Code | Yes | https://github.com/tangguotxy/TempCas |
| Open Datasets | Yes | Weibo Dataset: collected from a popular Chinese microblog platform (Cao et al. 2017); https://github.com/CaoQi92/DeepHawkes. Multimedia Content Dataset: collected from a widely used mobile social application; https://github.com/tangguotxy/TempCas |
| Dataset Splits | Yes | We randomly take 80% of data for training, 10% for validation and 10% for evaluation. (A minimal split sketch appears after this table.) |
| Hardware Specification | No | The paper does not specify any hardware details such as CPU, GPU models, or memory used for the experiments. |
| Software Dependencies | No | The paper mentions using Xavier initialization and Adam optimizer but does not specify software dependencies with version numbers (e.g., Python, TensorFlow, PyTorch versions). |
| Experiment Setup | Yes | In the cascade graph embedding part of TempCas, the max path length L is fixed as 8 for multimedia contents and 4 for Weibo posts, and the max path number k for each graph is set as 100 for multimedia contents and 50 for Weibo posts. The hidden size of the hierarchical attention network and the LSTM layer is set as 64 and 128 respectively. In time-series modeling, the kernel size of CNN is set as 5 and the hidden size of CNN and LSTM is set as 128. The window size of attention CNN is 12 for multimedia contents and 5 for Weibo posts. We adopt 2 dense layers for the final output, where the hidden dimensions are 128 and 1 respectively. At last, we leverage the Xavier initialization and Adam optimizer for parameter learning. The L2 factor is set as 10^-5. (A hypothetical settings sketch follows the table.) |
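
The Pseudocode row points to Algorithm 1 (Path Sampling Strategy), which the paper gives only in pseudocode. The sketch below is a hypothetical stand-in, not the authors' algorithm: it enumerates up to `max_paths` root-to-leaf forwarding paths of at most `max_len` nodes from a cascade tree given as an edge list, using the caps k and L reported in the experiment setup. The function name, input format, and traversal order are all assumptions.

```python
from collections import defaultdict

def sample_paths(edges, root, max_len=4, max_paths=50):
    """Hypothetical path sampler (stand-in for the paper's Algorithm 1):
    DFS over the cascade tree, cutting each path at max_len nodes and
    keeping at most max_paths paths overall."""
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)

    paths, stack = [], [[root]]
    while stack and len(paths) < max_paths:
        path = stack.pop()
        node = path[-1]
        if not children[node] or len(path) == max_len:
            paths.append(path)  # leaf reached or length cap hit
        else:
            for child in children[node]:
                stack.append(path + [child])
    return paths

# e.g. sample_paths([("a", "b"), ("a", "c"), ("b", "d")], "a")
# -> [['a', 'c'], ['a', 'b', 'd']]
```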
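
For the 80/10/10 split quoted in the Dataset Splits row, a minimal sketch follows; the shuffle seed and function name are assumptions, since the paper does not report a seed.

```python
import random

def split_cascades(cascades, seed=42):
    """Randomly split cascades 80% train / 10% validation / 10% evaluation,
    mirroring the split reported in the paper; the seed is an assumption."""
    items = list(cascades)
    random.Random(seed).shuffle(items)
    n_train, n_val = int(0.8 * len(items)), int(0.1 * len(items))
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```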
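
The hyperparameters quoted in the Experiment Setup row can be collected into one settings object. The dict below is a hypothetical arrangement: the key names are assumptions, while every value is taken verbatim from the row above.

```python
# Hypothetical settings for TempCas; key names are assumptions,
# values come from the paper's reported experiment setup.
SETTINGS = {
    "weibo":      {"max_path_len": 4, "max_paths": 50,  "attn_cnn_window": 5},
    "multimedia": {"max_path_len": 8, "max_paths": 100, "attn_cnn_window": 12},
    "hier_attn_hidden": 64,    # hierarchical attention network
    "graph_lstm_hidden": 128,  # LSTM in the cascade graph embedding part
    "ts_cnn_kernel": 5,        # time-series CNN kernel size
    "ts_hidden": 128,          # hidden size of time-series CNN and LSTM
    "dense_dims": (128, 1),    # two dense layers for the final output
    "l2_factor": 1e-5,         # trained with Xavier init and Adam
}
```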