Controlling Graph Dynamics with Reinforcement Learning and Graph Neural Networks
Authors: Eli Meirom, Haggai Maron, Shie Mannor, Gal Chechik
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We test our approach on two very different problems, Influence Maximization and Epidemic Test Prioritization, and show that our approach outperforms state-of-the-art methods, often significantly. [...] 5. Experiments We evaluated our approach in two tasks: (1) Epidemic test prioritization, and (2) Dynamic influence maximization. |
| Researcher Affiliation | Industry | Eli A. Meirom¹, Haggai Maron¹, Shie Mannor¹, Gal Chechik¹. ¹NVIDIA Research, Israel. Correspondence to: Eli Meirom <emeirom@nvidia.com>. |
| Pseudocode | No | No pseudocode or clearly labeled algorithm block was found in the paper. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | Real-World Datasets. We tested our algorithm and baselines on graphs of different sizes and sources, ranging from 5K to over 100K nodes. (1) CA-GrQc: a research collaboration network (Rossi & Ahmed, 2015). (2) Montreal: based on WiFi hotspot tracing (Hoen et al., 2015). (3) Portland: a compartment-based synthetic network (Wells et al., 2013; Eubank et al., 2004). (4) Email: an email network (Leskovec et al., 2007). (5) GEMSEC-RO: friendship relations in the Deezer music service (Rozemberczki et al., 2019). |
| Dataset Splits | No | The paper states 'Algorithms were trained on randomly generated PA networks with 1000 nodes' and evaluates on various real-world and synthetic datasets, but it does not provide explicit training/validation/test split percentages or sample counts for these datasets. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions software components like Proximal Policy Optimization (PPO) and Graph Neural Networks (GNNs), and implicitly PyTorch through a citation, but does not provide specific version numbers for these software dependencies (e.g., 'PyTorch 1.9' or 'CUDA 11.1'). |
| Experiment Setup | No | The paper mentions some aspects of the training procedure, such as 'Each experiment was performed with at least three random seeds' and a sampling mechanism with a parameter 'ϵ', but it does not provide comprehensive experimental setup details such as learning rate, batch size, number of epochs, or optimizer settings. |