Diffusion Source Identification on Networks with Statistical Confidence

Authors: Quinlan E Dawkins, Tianxi Li, Haifeng Xu

ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate our approach via extensive synthetic experiments on well-known random network models, a large data set of hundreds of real-world networks, as well as a mobility network between cities concerning the COVID-19 spreading.
Researcher Affiliation | Academia | 1 Department of Computer Science, University of Virginia, Charlottesville, Virginia, USA; 2 Department of Statistics, University of Virginia, Charlottesville, Virginia, USA.
Pseudocode | Yes | Algorithm 1: Vanilla MC for Confidence Set Construction (an illustrative sketch follows this table)
Open Source Code | Yes | All source code of this paper can be found at https://github.com/labsigma/Diffusion-Source-Identification.
Open Datasets | Yes | We generate networks from three random network models: random 4-regular trees, the preferential attachment model (Barabási & Albert, 1999), and the small-world (S-W) network model (Watts & Strogatz, 1998).
Dataset Splits | No | The paper evaluates the coverage rate of its confidence sets and uses Monte Carlo simulations but does not provide specific details on train/validation/test dataset splits needed for model reproduction in a machine learning context.
Hardware Specification | No | The paper mentions running experiments 'on 20 cores' but does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., library names, frameworks with versions) are mentioned.
Experiment Setup | Yes | The Monte Carlo size m is 10000.
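
For orientation, here is a minimal, hypothetical sketch of how a vanilla Monte Carlo confidence set of the kind named in the Pseudocode row might be put together, using the Monte Carlo size m from the Experiment Setup row and a preferential-attachment network like those in the Open Datasets row. Everything in it is an assumption made for illustration: the SI-style diffusion simulator, the sample-splitting step, the log-frequency "typicality" statistic, and the function names simulate_si and mc_confidence_set are not taken from the paper or its repository.

```python
# Hedged sketch of a Monte Carlo confidence set for diffusion source
# identification; not the authors' Algorithm 1 or released code.
import math
import random

import networkx as nx


def simulate_si(G, source, n_infected, rng):
    """Grow an SI-style infection from `source` until `n_infected` nodes
    are infected, adding one randomly chosen boundary node per step."""
    infected = {source}
    boundary = list(G.neighbors(source))
    while len(infected) < n_infected and boundary:
        v = boundary.pop(rng.randrange(len(boundary)))
        if v in infected:
            continue
        infected.add(v)
        boundary.extend(u for u in G.neighbors(v) if u not in infected)
    return infected


def mc_confidence_set(G, observed, alpha=0.1, m=10000, rng=None):
    """For each candidate source in the observed infected set, estimate a
    Monte Carlo p-value from m simulated snapshots and keep candidates
    whose p-value is at least alpha."""
    rng = rng or random.Random(0)
    n_obs = len(observed)
    kept = []
    for s in observed:
        samples = [simulate_si(G, s, n_obs, rng) for _ in range(m)]
        half = m // 2
        # Per-node infection frequency estimated from the first half.
        freq = {}
        for sample in samples[:half]:
            for v in sample:
                freq[v] = freq.get(v, 0) + 1

        def typicality(nodes):
            # Log-frequency score of an infected set; a hypothetical
            # statistic chosen for illustration, not the paper's choice.
            return sum(math.log(freq.get(v, 0) + 1.0) for v in nodes)

        # Compare the observed snapshot against the held-out half.
        t_obs = typicality(observed)
        t_null = [typicality(sample) for sample in samples[half:]]
        p_value = sum(t <= t_obs for t in t_null) / len(t_null)
        if p_value >= alpha:
            kept.append(s)
    return kept


if __name__ == "__main__":
    rng = random.Random(42)
    # Preferential-attachment network, one of the synthetic models cited.
    G = nx.barabasi_albert_graph(400, 3, seed=42)
    true_source = 0
    observed = simulate_si(G, true_source, 50, rng)
    # A small m keeps the demo fast; the paper reports m = 10000.
    cs = mc_confidence_set(G, observed, alpha=0.1, m=200, rng=rng)
    print(f"{len(cs)} of {len(observed)} infected nodes kept;"
          f" true source included: {true_source in cs}")
```

For the authors' actual Algorithm 1 and the test statistics they study, see the repository linked in the Open Source Code row.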