Maximizing Activity in Ising Networks via the TAP Approximation
Authors: Christopher W. Lynn, Daniel D. Lee
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Fig. 1, we compare various orders of the Plefka approximation across a range of networks for the norm p = 1. We experimentally evaluate the performance of our greedy algorithm under various orders of the Plefka expansion. We also provide the first comparison between Ising influence algorithms and the traditional greedy influence maximization algorithm in (Kempe, Kleinberg, and Tardos 2003). |
| Researcher Affiliation | Academia | Christopher W. Lynn, Department of Physics & Astronomy, University of Pennsylvania, Philadelphia, PA 19104, USA; Daniel D. Lee, Department of Electrical & Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104, USA |
| Pseudocode | Yes | Algorithm 1: Projected Gradient Ascent (PGA); Algorithm 2: Greedy algorithm for choosing the top H influential nodes in an Ising network (GI). A hedged PGA sketch appears after this table. |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the described methodology. |
| Open Datasets | No | The paper refers to network types like 'Erdős-Rényi', 'Preferential Attachment', and a 'collaboration network of physicists on the arXiv', but it does not provide concrete access information (link, DOI, specific citation with authors/year) for these datasets to indicate public availability. |
| Dataset Splits | No | The paper does not specify explicit training, validation, or test dataset splits (e.g., percentages or sample counts) for reproducibility of data partitioning. |
| Hardware Specification | No | The paper mentions the time taken for computations (e.g., '10 minutes', '5 seconds') and the size of networks, but it does not explicitly describe the specific hardware used (e.g., CPU, GPU models, memory) for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies (e.g., library or solver names with version numbers) needed to replicate the experiments. |
| Experiment Setup | Yes | In practice, we find that γ ≈ 0.01 yields rapid convergence for most systems up to the third-order approximation. For each network, we assume that the interactions are symmetric (J = J^T) with uniform weights and that the initial bias is zero (b^0 = 0). We then study the performance of the various algorithms across a range of interaction strengths, summarized by the spectral radius ρ(J). For each network, we ensure ∑_j J_ij ≤ 1/2 and we average over many draws of the initial bias {b_i^0} ~ U[-1/2, 1/2]. (A sketch of this setup follows the table.) |
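
To make Algorithm 1 concrete, here is a minimal Python sketch of projected gradient ascent for the Ising influence problem. This is an illustration, not the authors' code (none is released): it uses the first-order (naive mean-field) Plefka approximation rather than the paper's second-order TAP correction, assumes a budget set of the form {b : b ≥ 0, ∑_i b_i = H}, and reuses the γ = 0.01 step size quoted in the setup row; all function names are invented for this sketch.

```python
import numpy as np

def mean_field_m(J, b, beta=1.0, tol=1e-8, max_iter=10_000):
    """Fixed-point iteration for the naive mean-field (first-order Plefka)
    self-consistency m_i = tanh(beta * (sum_j J_ij m_j + b_i))."""
    m = np.zeros(len(b))
    for _ in range(max_iter):
        m_new = np.tanh(beta * (J @ m + b))
        if np.max(np.abs(m_new - m)) < tol:
            return m_new
        m = m_new
    return m

def grad_total_activity(J, m, beta=1.0):
    """Gradient of the total activity M(b) = sum_i m_i at the fixed point.
    Differentiating the self-consistency equation gives
    dM/db = beta * D (I - beta * J^T D)^{-1} 1, with D = diag(1 - m^2)."""
    n = len(m)
    D = np.diag(1.0 - m**2)
    return beta * D @ np.linalg.solve(np.eye(n) - beta * J.T @ D, np.ones(n))

def project_simplex(v, H):
    """Euclidean projection onto {b : b >= 0, sum(b) = H}
    (standard sort-based simplex projection)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - H
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def pga(J, H, beta=1.0, gamma=0.01, n_steps=500):
    """Projected gradient ascent on M(b), starting from a uniform allocation."""
    b = np.full(J.shape[0], H / J.shape[0])
    for _ in range(n_steps):
        m = mean_field_m(J, b, beta)
        b = project_simplex(b + gamma * grad_total_activity(J, m, beta), H)
    return b

# Toy usage on a small random symmetric coupling matrix
rng = np.random.default_rng(0)
A = np.triu((rng.random((20, 20)) < 0.3).astype(float), 1)
J_toy = 0.05 * (A + A.T)
print(pga(J_toy, H=1.0)[:5])
```

Replacing mean_field_m with the TAP fixed point, which subtracts the Onsager reaction term β² m_i ∑_j J_ij² (1 − m_j²) inside the tanh, would give the second-order approximation the paper is named for.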
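
The experiment-setup row can be made concrete in the same way. The sketch below is again an illustration (network size, edge probability, and the number of bias draws are arbitrary choices, not values from the paper): it builds a symmetric Erdős-Rényi coupling matrix with uniform weights, rescales it so every row sum obeys ∑_j J_ij ≤ 1/2, reports the spectral radius ρ(J), and averages the naive mean-field activity over random draws of the initial bias {b_i^0} ~ U[-1/2, 1/2].

```python
import numpy as np

rng = np.random.default_rng(0)

def erdos_renyi_couplings(n=100, p=0.1):
    """Symmetric Erdos-Renyi couplings with uniform weights, rescaled so that
    every row sum satisfies sum_j J_ij <= 1/2 (the normalization quoted above)."""
    upper = np.triu((rng.random((n, n)) < p).astype(float), 1)
    J = upper + upper.T                      # J = J^T, zero diagonal
    return J * (0.5 / max(J.sum(axis=1).max(), 1.0))

def magnetizations(J, b, beta=1.0, tol=1e-8, max_iter=10_000):
    """Naive mean-field fixed point m_i = tanh(beta * (sum_j J_ij m_j + b_i))."""
    m = np.zeros(len(b))
    for _ in range(max_iter):
        m_new = np.tanh(beta * (J @ m + b))
        if np.max(np.abs(m_new - m)) < tol:
            return m_new
        m = m_new
    return m

J = erdos_renyi_couplings()
print("spectral radius rho(J):", np.abs(np.linalg.eigvals(J)).max())

# Average the total activity over many draws of the initial bias
# {b0_i} ~ U[-1/2, 1/2], as in the quoted setup (50 draws is a guess).
totals = [magnetizations(J, rng.uniform(-0.5, 0.5, J.shape[0])).sum()
          for _ in range(50)]
print("mean total activity over draws:", np.mean(totals))
```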