Enhancing Multi-Scale Diffusion Prediction via Sequential Hypergraphs and Adversarial Learning
Authors: Pengfei Jiao, Hongqian Chen, Qing Bao, Wang Zhang, Huaming Wu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on four datasets demonstrate that our model significantly outperforms state-of-the-art methods. In this section, we conduct experiments on both microscopic and macroscopic cascade predictions to demonstrate the effectiveness of our proposed model. |
| Researcher Affiliation | Academia | (1) School of Cyberspace, Hangzhou Dianzi University, China; (2) College of Intelligence and Computing, Tianjin University, China; (3) Center for Applied Mathematics, Tianjin University, China; (4) Data Security Governance Zhejiang Engineering Research Center, Hangzhou Dianzi University, China |
| Pseudocode | No | The paper describes computational processes and model components (e.g., HGNN, LSTM updates) in mathematical notation and prose, but does not include any clearly labeled pseudocode or algorithm blocks. (For orientation only, a generic HGNN layer sketch follows this table.) |
| Open Source Code | No | The paper does not contain any statement about releasing open-source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We conduct experiments on four datasets, i.e., Christianity, Android, Douban and Memetracker. The statistics of these datasets are shown in Table 1. A detailed description of the datasets can be found in the Appendix. Christianity (Sankar et al. 2020) consists of the user friendship network and cascading interactions related to Christian themes on Stack Exchange. Android (Sankar et al. 2020) is collected from Stack Exchange, which is a community Q&A website. ... Douban (Zhong et al. 2012) is a Chinese social website... Memetracker (Leskovec, Backstrom, and Kleinberg 2009) collects a million news stories and blog posts from online websites... |
| Dataset Splits | Yes | For each dataset, we employ a random sampling method to allocate 80% of cascades for training, 10% for validation, and the remaining 10% for testing. (A sketch of this split protocol follows this table.) |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware used for running the experiments (e.g., GPU models, CPU types, memory specifications). |
| Software Dependencies | No | The paper states that the model is implemented in PyTorch and trained with the Adam optimizer at a learning rate of 0.001, but it does not list library versions or any other dependencies, so the software environment cannot be reconstructed. |
| Experiment Setup | Yes | The embedding dimension is set to 64, and the batch size is 32. The balance parameter λ is assigned a value of 0.3, while the hyperparameter γ is set to 0.05. Social homophily learning utilizes a 2-layer GCN, and global interaction learning is facilitated through a single-layer HGNN. Additionally, the number of time intervals is set to 8. (These values, together with the optimizer settings above, are collected in the configuration sketch after this table.) |
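
Since the paper gives its HGNN only in mathematical notation and releases no code, the sketch below shows a standard hypergraph convolution layer in the spirit of HGNN (Feng et al. 2019), written in PyTorch. The class name, the ReLU activation, and the omission of hyperedge weights are assumptions for illustration; this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class HGNNLayer(nn.Module):
    """Generic hypergraph convolution: sigma(D_v^{-1/2} H D_e^{-1} H^T D_v^{-1/2} X Theta).
    Illustrative sketch only; hyperedge weights W are taken as identity."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.theta = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x: torch.Tensor, incidence: torch.Tensor) -> torch.Tensor:
        # incidence: (num_nodes, num_edges) float incidence matrix H of the hypergraph
        d_v = incidence.sum(dim=1).clamp(min=1)   # node degrees
        d_e = incidence.sum(dim=0).clamp(min=1)   # hyperedge degrees
        inv_sqrt_dv = d_v.pow(-0.5)
        x = self.theta(x)                         # apply learnable weights Theta
        x = inv_sqrt_dv.unsqueeze(1) * x          # D_v^{-1/2} X
        x = incidence.t() @ x                     # gather node features into hyperedges
        x = x / d_e.unsqueeze(1)                  # D_e^{-1}: average within each hyperedge
        x = incidence @ x                         # scatter hyperedge features back to nodes
        x = inv_sqrt_dv.unsqueeze(1) * x          # D_v^{-1/2} again
        return torch.relu(x)

# Toy usage: 5 users, 2 cascades treated as hyperedges over their participants.
H = torch.tensor([[1., 0.], [1., 1.], [0., 1.], [1., 0.], [0., 1.]])
layer = HGNNLayer(in_dim=64, out_dim=64)
out = layer(torch.randn(5, 64), H)  # -> shape (5, 64)
```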
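The Dataset Splits row reports an 80/10/10 random split over cascades. A minimal sketch of that protocol, assuming cascades are held in a Python list; the function name and seed are illustrative, since the paper specifies neither:

```python
import random

def split_cascades(cascades, seed=42):
    """Randomly split cascades 80/10/10 into train/val/test,
    mirroring the protocol stated in the paper."""
    cascades = list(cascades)
    random.Random(seed).shuffle(cascades)  # fixed seed for repeatability (not from the paper)
    n = len(cascades)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    train = cascades[:n_train]
    val = cascades[n_train:n_train + n_val]
    test = cascades[n_train + n_val:]
    return train, val, test
```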
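The Software Dependencies and Experiment Setup rows together pin down the reported training configuration. The sketch below collects those values in PyTorch; the placeholder embedding module and its vocabulary size stand in for the unreleased MINDS model.

```python
import torch
import torch.nn as nn

# Hyperparameters as reported in the paper.
EMBED_DIM = 64          # embedding dimension
BATCH_SIZE = 32
LEARNING_RATE = 1e-3    # Adam learning rate
LAMBDA = 0.3            # balance parameter (lambda)
GAMMA = 0.05            # hyperparameter (gamma)
NUM_GCN_LAYERS = 2      # social homophily learning
NUM_HGNN_LAYERS = 1     # global interaction learning
NUM_TIME_INTERVALS = 8

# Placeholder module standing in for the unreleased MINDS model;
# 10_000 is an arbitrary user-vocabulary size for illustration.
model = nn.Embedding(10_000, EMBED_DIM)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
```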