Learning Mixtures of Graphs from Epidemic Cascades

Authors: Jessica Hoffmann, Soumya Basu, Surbhi Goel, Constantine Caramanis

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our results on synthetic data. We first draw random graphs from a distribution (specified below), and each sample is a simulation of a cascade spreading on it. Once the graphs are drawn, the experiments are run 10 times. The shaded area represents the 25th to 75th percentiles. Our first two experiments are on Erdős–Rényi G(N, p) graphs. In Figure 5a, we investigate the maximum error on learned edges and compare it with the average error.
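The quoted setup (draw a random graph, then treat each simulated cascade as one sample) can be sketched as follows. This is a minimal illustration, not the paper's exact model: the one-chance independent-cascade dynamics, the uniform edge weight, and the graph sizes are all assumptions, and the two-graph coin flip only gestures at the paper's mixture setting.

```python
import random

def erdos_renyi(n, p, rng):
    """Draw an undirected Erdős–Rényi G(n, p) graph as an adjacency list."""
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def simulate_cascade(adj, weight, seed, rng):
    """One cascade sample: each newly infected node gets a single chance to
    infect each still-susceptible neighbor with probability `weight`.
    Returns {node: infection_time}."""
    times = {seed: 0}
    frontier, t = [seed], 0
    while frontier:
        t += 1
        hit = {v for u in frontier for v in adj[u]
               if v not in times and rng.random() < weight}
        for v in hit:
            times[v] = t
        frontier = list(hit)
    return times

# One sample from a two-graph mixture: flip a fair coin to pick the graph,
# then run a cascade from a uniformly random seed on it.
rng = random.Random(0)
g1, g2 = erdos_renyi(50, 0.1, rng), erdos_renyi(50, 0.1, rng)
graph = g1 if rng.random() < 0.5 else g2
sample = simulate_cascade(graph, weight=0.3, seed=rng.randrange(50), rng=rng)
```

Repeating the last four lines gives the collection of cascade samples that the learning algorithm consumes; only the infection times (not which mixture component was drawn) would be observed.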
Researcher Affiliation | Academia | University of Texas at Austin.
Pseudocode | Yes | Algorithm 1: Learn the weights of undirected edges
  Input: vertex set V
  Output: edge weights W for the two epidemic graphs
  E ← LEARNEDGES(V)
  (S, W) ← LEARN2NODES(V, E)
  while S ≠ V do
    select u ∈ S, v ∈ V \ S such that (u, v) ∈ E
    if deg(v) ≥ 3 then
      W ← W ∪ LEARNSTAR(v, E, W)
    end if
    if deg(v) = 2 then
      set w ∈ S such that (u, w) ∈ E
      set t ∈ V such that (v, t) ∈ E and t ≠ u
      if t ∈ S then
        W ← W ∪ LEARNLINE(t, v, u, w, S, W)
      end if
    end if
    S ← S ∪ {v}
  end while
  Return W
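The control flow of the quoted Algorithm 1 can be sketched in Python. LEARN2NODES, LEARNSTAR, and LEARNLINE are the paper's subroutines and are stubbed out here as placeholder callables; the adjacency list stands in for the edge set returned by LEARNEDGES, so only the loop that grows the processed set S is illustrated, not the weight estimation itself.

```python
def learn_edge_weights(V, adj, learn2nodes, learnstar, learnline):
    """Control-flow sketch of Algorithm 1: grow a set S of processed
    vertices; for each new vertex v, delegate to a star subroutine
    (deg(v) >= 3) or a line subroutine (deg(v) == 2)."""
    E = {frozenset((u, v)) for u in adj for v in adj[u]}
    S, W = learn2nodes(V, E)                     # bootstrap on two adjacent nodes
    while S != set(V):
        # pick an edge (u, v) crossing from S into V \ S
        u, v = next((a, b) for a in sorted(S) for b in sorted(adj[a])
                    if b not in S)
        if len(adj[v]) >= 3:
            W.update(learnstar(v, E, W))
        elif len(adj[v]) == 2:
            w = next(x for x in sorted(adj[u]) if x in S)   # u's neighbor in S
            t = next(x for x in sorted(adj[v]) if x != u)   # v's other neighbor
            if t in S:
                W.update(learnline(t, v, u, w, S, W))
        S.add(v)
    return W

# Hypothetical stubs, recording calls instead of estimating weights.
calls = []
def learn2nodes(V, E):
    return {0, 1}, {frozenset((0, 1)): 0.5}      # pretend edge (0,1) is learned
def learnstar(v, E, W):
    calls.append(("star", v)); return {}
def learnline(t, v, u, w, S, W):
    calls.append(("line", t, v, u, w)); return {}

adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}   # a 4-cycle
W = learn_edge_weights({0, 1, 2, 3}, adj, learn2nodes, learnstar, learnline)
```

On the 4-cycle every vertex has degree 2, so only the line branch can fire, and it does so only once its fourth vertex t is already inside S.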
Open Source Code | No | The paper does not provide any explicit statement or link regarding the availability of its source code.
Open Datasets | No | We validate our results on synthetic data. We first draw random graphs from a distribution (specified below), and each sample is a simulation of a cascade spreading on it.
Dataset Splits | No | The paper does not specify any training, validation, or test dataset splits. It mentions using 'synthetic data' and running 'experiments... 10 times' but no detailed splits.
Hardware Specification | No | The paper does not mention any specific hardware used for running experiments (e.g., CPU, GPU models, or cloud computing instances).
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers needed for replication.
Experiment Setup | Yes | Our first two experiments are on Erdős–Rényi G(N, p) graphs. In Figure 5a, we investigate the maximum error on learned edges and compare it with the average error. We find that the "Max error" curve follows the dependence predicted by our theory, of ε = O(1/√(N log(N))). Finally, by varying the degree on random d-regular graphs with a fixed number of vertices (Figure 5c), we see that the sample complexity is multiplied by 2 as the number of edges grows from Θ(N) to Θ(N²), as predicted since the dependence in the degree is logarithmic.
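The factor-of-2 observation is the logarithm identity log(N²) = 2·log(N): if the sample complexity depends only logarithmically on the number of edges, growing the edge count from Θ(N) to Θ(N²) doubles that term. A one-line check:

```python
import math

# With a log dependence on the edge count m, going from m = Theta(N)
# to m = Theta(N^2) doubles the term, since log(N**2) == 2 * log(N).
N = 1_000_000
assert math.isclose(math.log(N ** 2), 2 * math.log(N))
```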