Learning Diffusions under Uncertainty
Authors: Hao Huang, Qian Yan, Keqi Han, Ting Gan, Jiawei Jiang, Quanqing Xu, Chuanhui Yang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical studies are conducted on both synthetic and real-world networks, and the results verify the effectiveness and efficiency of our approach. |
| Researcher Affiliation | Collaboration | 1. School of Computer Science, Wuhan University, China; 2. OceanBase, Ant Group, China |
| Pseudocode | No | The paper describes the steps of the 'alternating maximization method' in text, but it does not present a formal pseudocode block or algorithm listing. |
| Open Source Code | Yes | The source code of PIND and the data used in the experiments are available at https://github.com/DiffusionNetworkInference/PIND. |
| Open Datasets | Yes | We adopt LFR benchmark graphs (Lancichinetti, Fortunato, and Radicchi 2008) as the synthetic diffusion networks. In addition, we also adopt two commonly used real-world microblogging networks (Wang et al. 2014), namely, (1) DUNF, which contains 750 users and 2974 following relationships, and (2) DPU, which contains 1038 users and 11385 following relationships. |
| Dataset Splits | No | The paper does not explicitly provide details about training, validation, or test dataset splits (e.g., percentages or sample counts). It mentions simulating diffusion processes for data generation, but not data partitioning for model training and evaluation in a typical train/validation/test split. |
| Hardware Specification | Yes | All algorithms in the experiments are implemented in Python, running on a MacBook Pro with Intel Core i5-1038NG7 CPU at 2.00GHz and 16GB RAM. |
| Software Dependencies | No | The paper states 'All algorithms in the experiments are implemented in Python,' but does not provide specific version numbers for Python or any other libraries or software dependencies. |
| Experiment Setup | Yes | In our PIND algorithm, the number r of sampling rounds for x is set to 100, and the stop condition for the iterative updates of x and α is that the variations of each x_ji ∈ x and each α_ji ∈ α are less than 0.01. In each diffusion process, each infected node tries to infect its uninfected child nodes with a certain probability, which follows a Gaussian distribution with mean 0.3 and standard deviation 0.05, so that about 95% of the infection propagation probabilities fall within the range 0.2 to 0.4. |
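
The "about 95%" claim in the quoted setup follows directly from the Gaussian parameters: the interval [0.2, 0.4] is the mean ± 2 standard deviations, which covers roughly 95.4% of a normal distribution. A minimal sketch verifying this empirically (an illustration, not the authors' code from the PIND repository):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample infection propagation probabilities as described in the setup:
# Gaussian with mean 0.3 and standard deviation 0.05.
probs = rng.normal(loc=0.3, scale=0.05, size=100_000)

# [0.2, 0.4] is mean +/- 2 standard deviations, so roughly 95% of the
# sampled probabilities should fall inside it.
frac = np.mean((probs >= 0.2) & (probs <= 0.4))
print(f"fraction in [0.2, 0.4]: {frac:.3f}")
```

In a faithful reproduction, values sampled outside [0, 1] would also need to be clipped or resampled before being used as probabilities; the paper excerpt does not specify which.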