Higher-Order Causal Message Passing for Experimentation with Complex Interference
Authors: Mohsen Bayati, Yuwei Luo, William Overman, Mohamad Sadegh Shirani Faradonbeh, Ruoxuan Xiong
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we use synthetic experiments under simulated and real-world network interference patterns to compare the performance of FO-CMP and HO-CMP estimators, outlined in Table 1 and Algorithm 1, with several benchmarks. Extensive simulations across multiple domains, using synthetic and real network data, demonstrate the efficacy of our approach in estimating total treatment effect dynamics, even in cases where interference exhibits non-monotonic behavior in the probability of treatment. |
| Researcher Affiliation | Academia | Mohsen Bayati1 Yuwei Luo1 William Overman1 Sadegh Shirani1 Ruoxuan Xiong2 1 Stanford Graduate School of Business 2 Emory University {bayati, yuweiluo, wpo, sshirani}@stanford.edu, ruoxuan.xiong@emory.edu |
| Pseudocode | Yes | Algorithm 1: Higher-Order Causal Message Passing (HO-CMP) |
| Open Source Code | Yes | We will provide open access to the data and code. |
| Open Datasets | Yes | We consider two networks (graphs). The first graph is a simulated random geometric graph model, studied by Leung [2022]. The second graph is a social network of Twitch users [Rozemberczki and Sarkar, 2021]. |
| Dataset Splits | No | The paper uses synthetic experiments where data is generated for each simulation run, rather than using fixed datasets with predefined training, validation, and test splits. Therefore, explicit validation dataset split information is not applicable in the traditional sense. |
| Hardware Specification | Yes | All experiments were conducted on a MacBook Air with an Apple M1 chip and 16 GB of memory, with each setting taking about 15 minutes for 100 iterations. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We primarily focus on the staggered rollout design with L distinct treated probabilities, denoted by π(1), …, π(L)... We use two values of T = 40, 200 and set L = 4, with (π(1), π(2), π(3), π(4)) = (0.1, 0.2, 0.4, 0.5). Table 1: Two examples of feature functions. Feature functions {ϕk(ν̂_t(w), ρ̂_t(w)², w)}, k ∈ [K], and fθ(·): FO-CMP uses {ν̂_t(w), w_{t+1}, ν̂_t(w)·w_t} with linear regression; HO-CMP uses {ν̂_t(w), w_{t+1}, ν̂_t(w)·w_t, ρ̂_t(w)², w²_{t+1}} with linear regression. |
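To make the reported setup concrete, the following is a minimal sketch of how the HO-CMP feature set from Table 1 could be assembled and fit with linear regression under the stated staggered rollout (T = 40, L = 4, probabilities 0.1/0.2/0.4/0.5). The data-generating step and all variable names here are illustrative stand-ins, not the authors' released code; in the paper, ν̂_t(w) and ρ̂_t(w) are computed from observed unit outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 40
# Staggered rollout: L = 4 treated probabilities, as in the experiment setup.
probs = [0.1, 0.2, 0.4, 0.5]
w = np.repeat(probs, T // len(probs))           # w_1, ..., w_T

# Stand-in summary statistics (hypothetical; the paper derives these
# from unit-level outcomes under network interference).
nu = 1.0 + 2.0 * w + rng.normal(0, 0.01, T)     # nu_hat_t(w): outcome mean
rho = 0.5 * np.sqrt(w * (1 - w)) + 0.1          # rho_hat_t(w): outcome std

# HO-CMP features from Table 1: nu_t, w_{t+1}, nu_t * w_t, rho_t^2, w_{t+1}^2.
X = np.column_stack([
    nu[:-1],             # nu_hat_t(w)
    w[1:],               # w_{t+1}
    nu[:-1] * w[:-1],    # nu_hat_t(w) * w_t
    rho[:-1] ** 2,       # rho_hat_t(w)^2
    w[1:] ** 2,          # w_{t+1}^2
    np.ones(T - 1),      # intercept
])
y = nu[1:]               # target: next-period mean nu_hat_{t+1}(w)

# f_theta is linear regression in both FO-CMP and HO-CMP per Table 1.
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ theta
print(theta.shape)
```

Dropping the `rho` and squared columns from `X` recovers the FO-CMP feature set, which is the only difference between the two estimators in Table 1.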