Amortized Network Intervention to Steer the Excitatory Point Processes

Authors: Zitao Song, Wendi Ren, Shuang Li

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We assess the effectiveness of our approach, Amortized Network Intervention (ANI), in managing networked temporal dynamics through simulated and real-world experiments. Our results demonstrate that ANI successfully reduces mutual-influence effects on both synthetic data and two real datasets, measured as a reduction in event intensities.
Researcher Affiliation | Academia | Zitao Song, Wendi Ren & Shuang Li, The Chinese University of Hong Kong, Shenzhen. {zitaosong,wendi_ren}@link.cuhk.edu.cn, lishuang@cuhk.edu.cn
Pseudocode | Yes | Algorithm 1: ANI (Meta-Training Phase)
Open Source Code | No | The paper does not include an explicit statement about releasing its source code, nor a link to a code repository for the described methodology.
Open Datasets | Yes | We used daily COVID-19 data released publicly by (NYTimes, 2020) to learn the excitatory point processes of the pandemic outbreak. The data contain the cumulative counts of coronavirus cases in the United States, at the state and county level, over time. Specifically, we separated the U.S. COVID-19 data into state-wise records and further split each state-wise record into county corpora, where each split, called a local region, contains distinct intensity trajectories from no more than 25 counties.
Dataset Splits | Yes | Concretely, we trained an amortized policy on five different county corpora and tested the amortized interventions on multiple unseen county dynamics. To generalize to an unseen split, the agent must be invariant to the ordering of counties and to the amplitude or phase of the spikes of the underlying excitatory point processes.
Hardware Specification | Yes | We trained the dynamic, policy, and PEM-value networks using the ADAM optimizer with a 1E-2 decay rate across 4 RTX 3090 GPUs.
Software Dependencies | No | The paper mentions software such as the ADAM optimizer, Neural ODEs, and SUMO, but it does not specify version numbers for these or any other software dependencies.
Experiment Setup | Yes | For the dynamic model, we parameterized the ODE forward function fh in Eq. (3) as a time-dependent multilayer perceptron (MLP) with dimensions [64-64-64]. We used the Softplus activation function. ... The initial learning rates for dynamic learning, policy learning, and PEM-value function learning were set to 1E-3, 1E-4, and 1E-4, respectively.
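The county-corpus construction quoted under Open Datasets (splitting a state-wise record into "splits" of at most 25 counties) can be sketched as a simple chunking step. This is a minimal sketch under stated assumptions: the function name and the contiguous chunking are illustrative, and the paper may instead group counties by geography or data availability.

```python
def make_splits(counties, max_per_split=25):
    """Partition a state's county list into splits of at most
    `max_per_split` counties; each split acts as one local region."""
    return [counties[i:i + max_per_split]
            for i in range(0, len(counties), max_per_split)]

# A state with 58 counties yields splits of sizes 25, 25, and 8.
splits = make_splits([f"county_{i}" for i in range(58)])
sizes = [len(s) for s in splits]
```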
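The Experiment Setup row describes the ODE forward function fh as a time-dependent MLP with [64-64-64] hidden layers and Softplus activations. Below is a pure-Python sketch of such a function paired with a fixed-step Euler solve. The weight initialization, the way time enters the network (concatenated to the state), and the Euler integrator are all illustrative assumptions; the paper trains with a Neural ODE solver and the ADAM optimizer rather than this toy setup.

```python
import math
import random

def softplus(x):
    # Numerically stable Softplus: log(1 + exp(x)).
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

class TimeMLP:
    """Time-dependent MLP f_h(t, h): the scalar time t is concatenated
    to the state h before the first layer (an assumption)."""
    def __init__(self, state_dim, hidden=(64, 64, 64), seed=0):
        rng = random.Random(seed)
        sizes = [state_dim + 1, *hidden, state_dim]  # +1 for the time input
        self.layers = []
        for fan_in, fan_out in zip(sizes, sizes[1:]):
            w = [[rng.gauss(0.0, 1.0 / math.sqrt(fan_in))
                  for _ in range(fan_in)] for _ in range(fan_out)]
            b = [0.0] * fan_out
            self.layers.append((w, b))

    def __call__(self, t, h):
        x = list(h) + [t]
        for i, (w, b) in enumerate(self.layers):
            x = [sum(wij * xj for wij, xj in zip(row, x)) + bi
                 for row, bi in zip(w, b)]
            if i < len(self.layers) - 1:  # Softplus on hidden layers only
                x = [softplus(v) for v in x]
        return x

def euler_solve(f, h0, t0=0.0, t1=1.0, steps=20):
    """Fixed-step Euler integration of dh/dt = f(t, h)."""
    h, t = list(h0), t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        dh = f(t, h)
        h = [hi + dt * di for hi, di in zip(h, dh)]
        t += dt
    return h

f_h = TimeMLP(state_dim=3)
h1 = euler_solve(f_h, [0.1, 0.2, 0.3])
```

The final linear layer is left without an activation so the vector field can take negative values, which a Softplus output would forbid.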