Efficient PAC Learnability of Dynamical Systems Over Multilayer Networks

Authors: Zirou Qiu, Abhijin Adiga, Madhav Marathe, S. S. Ravi, Daniel Rosenkrantz, Richard Stearns, Anil Kumar Vullikanti

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present experimental studies on the relationships between model parameters and the empirical performance of our PAC algorithm. Here, we study the performance of the algorithm on a variety of different networks (Magnani et al., 2013; Omodei et al., 2015; Stark et al., 2006; Coleman et al., 1957), as shown in Table 1.
Researcher Affiliation | Academia | ¹University of Virginia, Charlottesville, VA, USA; ²Biocomplexity Institute and Initiative, University of Virginia, Charlottesville, VA, USA; ³Department of Computer Science, University at Albany, SUNY, Albany, NY, USA.
Pseudocode | No | The algorithm is described in natural language within Section 3.1, 'An Efficient PAC Learner', but no formal pseudocode block, algorithm box, or structured code-like steps are provided. (An illustrative sketch of such a learner appears after this table.)
Open Source Code | Yes | Our source code (in C++ and Python), documentation, and selected datasets are available at https://github.com/bridgelessqiu/Learning-Multilayer-Dynamical-Systems-ICML24.
Open Datasets | Yes | We study the performance of the algorithm on a variety of different networks (Magnani et al., 2013; Omodei et al., 2015; Stark et al., 2006; Coleman et al., 1957), as shown in Table 1.
Dataset Splits | No | The paper mentions training sets and evaluating on configurations sampled from a distribution D, but it does not specify explicit training/validation/test splits (e.g., percentages or counts) or refer to standard pre-defined splits for reproducibility.
Hardware Specification | Yes | All experiments were performed on Intel Xeon(R) Linux machines with 64 GB of RAM.
Software Dependencies | No | The paper mentions that the source code is in 'C++ and Python' but does not list specific software dependencies with version numbers (e.g., library versions, compiler versions, or specific solver versions).
Experiment Setup | Yes | For each network, we have a target system h* where the threshold of each vertex v ∈ V on each layer i is in [0, deg_i(v) + 2]. For each such h*, a training set T = {(C_i, h*(C_i))}_{i=1}^q is constructed, where each C_i is sampled from a distribution D. We consider distributions where the state of each vertex in each C_i ∈ T is 0 w.p. p and 1 w.p. 1 − p, for p ∈ {0.1, 0.5, 0.9}. [...] Next, we study the relationship between ℓ and σ under a fixed |T| = 500 over different distributions. [...] Lastly, we study the effect of k on the loss ℓ using multilayer G_{n,p} networks of size 500 with an average degree (on each layer) of 10. The number of layers is increased from 2 to 6 while |T| is fixed at 500. The result is shown in Fig. 3(b) for three values of σ. (A data-generation sketch for this setup follows the table.)
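
For concreteness, the following is a minimal sketch of how the training data described in the 'Experiment Setup' row could be generated. The per-layer adjacency-list representation, the closed-neighborhood score, and the OR-style combination of per-layer threshold outputs are illustrative assumptions rather than the paper's exact model, and none of the identifiers below come from the authors' released code.

```python
import random

def sample_configuration(n, p):
    # Each vertex is 0 w.p. p and 1 w.p. 1 - p, matching the
    # distributions D described in the experiment setup.
    return [0 if random.random() < p else 1 for _ in range(n)]

def threshold_step(layers, thresholds, config):
    """One synchronous update of a multilayer threshold system.

    layers[i][v]     -- neighbors of vertex v on layer i (adjacency lists)
    thresholds[i][v] -- threshold of v on layer i, from [0, deg_i(v) + 2]
    Combining per-layer outputs with OR is an assumption; the paper's
    master function may differ.
    """
    n = len(config)
    nxt = []
    for v in range(n):
        fires = False
        for adj, tau in zip(layers, thresholds):
            # Score of v on this layer: 1-states in its closed neighborhood.
            score = config[v] + sum(config[u] for u in adj[v])
            if score >= tau[v]:
                fires = True
                break
        nxt.append(1 if fires else 0)
    return nxt

def make_training_set(layers, thresholds, q, p):
    # T = {(C_i, h*(C_i))}_{i=1}^q with each C_i sampled from D.
    n = len(layers[0])
    return [(C, threshold_step(layers, thresholds, C))
            for C in (sample_configuration(n, p) for _ in range(q))]

# Toy usage: a two-layer system on 4 vertices, |T| = 500, p = 0.5.
layers = [
    [[1], [0, 2], [1, 3], [2]],   # layer 1: a path
    [[2], [3], [0], [1]],         # layer 2: arbitrary extra links
]
thresholds = [[1, 2, 2, 1], [1, 1, 1, 1]]
T = make_training_set(layers, thresholds, q=500, p=0.5)
```

With q = |T| = 500 and p ∈ {0.1, 0.5, 0.9}, this mirrors the sampling regime the paper reports.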
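
Relatedly, since the 'Pseudocode' row notes that the learner is described only in natural language, the following hedged sketch illustrates the general shape a consistent-hypothesis learner can take in the simplest single-layer case: each labeled pair (C, h*(C)) constrains each vertex's feasible threshold interval, which the learner narrows before returning any consistent value. This is our own illustration of the consistency idea, not the paper's algorithm; the multilayer case, where a master function combines layer outputs, requires additional care.

```python
def learn_thresholds_single_layer(adj, T):
    """Illustrative consistent learner for a SINGLE-layer threshold system.

    adj[v] -- neighbors of v; T -- list of (C, C_next) labeled pairs.
    Assumed update rule: v's next state is 1 iff its closed-neighborhood
    score meets its threshold. NOT the paper's algorithm.
    """
    n = len(adj)
    lo = [0] * n                               # threshold of v is >= lo[v]
    hi = [len(adj[v]) + 2 for v in range(n)]   # ... and <= hi[v]
    for C, C_next in T:
        for v in range(n):
            score = C[v] + sum(C[u] for u in adj[v])
            if C_next[v] == 1:
                hi[v] = min(hi[v], score)      # v fired: threshold <= score
            else:
                lo[v] = max(lo[v], score + 1)  # v idle: threshold > score
    # Any value in [lo[v], hi[v]] is consistent; return the smallest.
    return lo
```

If the data are realizable (lo[v] <= hi[v] for every v), the returned hypothesis is consistent with T, and standard PAC arguments then bound its loss in terms of |T|.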