Doubly Robust Causal Effect Estimation under Networked Interference via Targeted Learning
Authors: Weilin Chen, Ruichu Cai, Zeqin Yang, Jie Qiao, Yuguang Yan, Zijian Li, Zhifeng Hao
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results on two real-world networks with semisynthetic data demonstrate the effectiveness of our proposed estimators. |
| Researcher Affiliation | Academia | 1) School of Computer Science, Guangdong University of Technology, Guangzhou, China; 2) Pazhou Laboratory (Huangpu), Guangzhou, China; 3) Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE; 4) College of Science, Shantou University, Shantou, China. |
| Pseudocode | No | The paper describes the model architecture and procedures in text and diagrams (Figure 2), but it does not include formal pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/WeilinChen507/targeted_interference and https://github.com/DMIRLAB-Group/TNet. |
| Open Datasets | Yes | BlogCatalog (BC) is an online community where users post blogs. In this dataset, each unit is a blogger and each edge is the social link between units. The features are bag-of-words representations of keywords in bloggers' descriptions. Flickr is an online social network where users can share images and videos. In this dataset, each unit is a user and each edge is the social relationship between units. The features are the lists of tags of units' interests. We reuse the data generation of Jiang & Sun (2022). As for the original datasets, the potential outcome is simulated by y_i(t_i, z_i) = t_i + z_i + po_i + 0.5 · po_{N_i} + ε_i. Original datasets are available at https://github.com/songjiang0909/Causal-Inference-on-Networked-Data. (A simulation sketch of this outcome model follows the table.) |
| Dataset Splits | No | The paper mentions evaluating its estimators on semi-synthetic data, but it does not describe explicit train/validation/test dataset splits. |
| Hardware Specification | Yes | All the experiments can be run on a single GeForce RTX 2080 Ti GPU with 11 GB of memory. |
| Software Dependencies | No | The paper mentions using the "PyTorch framework" and the "Adam optimizer." However, it does not provide version numbers for these software components, which would be needed to reproduce the software environment. |
| Experiment Setup | Yes | We use 1 graph convolution layer as our encoder, and every MLP in TNet has 3 fully connected layers with 64 hidden units in each layer. Dropout is applied with probability 0.05 during training. We use full-batch training with the Adam optimizer (Kingma & Ba, 2014), searching the learning rate over {0.001, 0.0001} for L1 + L2 and over {0.01, 0.001, 0.0001} for L3. The search space for parameters α and γ is {0.5, 1}, and β = 20 · n^{1/2}. In the estimators of ϵ, we use two B-spline estimators of degree 2 with the same number of knots searched over {4, 5, 10, 20} (all equally spaced on [0, 1]). (A model-configuration sketch follows the table.) |
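
The Open Datasets row quotes the semi-synthetic outcome generation reused from Jiang & Sun (2022). Below is a minimal NumPy sketch of that outcome model; the random graph, the treatment assignment, and the feature-driven term `po` are hypothetical stand-ins (the paper derives them from the real BlogCatalog/Flickr covariates), and only the final line follows the quoted equation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy graph: n units with a random symmetric adjacency matrix.
n = 100
adj = rng.random((n, n)) < 0.05
adj = np.triu(adj, 1)
adj = adj | adj.T

# Hypothetical stand-ins: t is the unit's own treatment, z a neighborhood
# exposure summary, and po a feature-driven potential-outcome term.
t = rng.binomial(1, 0.5, size=n).astype(float)
deg = np.maximum(adj.sum(axis=1), 1)     # guard against isolated units
z = (adj @ t) / deg                      # fraction of treated neighbors
po = rng.normal(size=n)                  # feature-based term po_i
po_nbr = (adj @ po) / deg                # neighborhood average po_{N_i}
eps = rng.normal(scale=0.1, size=n)      # outcome noise ε_i

# Outcome model quoted above: y_i(t_i, z_i) = t_i + z_i + po_i + 0.5·po_{N_i} + ε_i
y = t + z + po + 0.5 * po_nbr + eps
```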
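The Experiment Setup row quotes the TNet hyperparameters. Below is a minimal PyTorch sketch under those settings: one graph-convolution encoder, 3-layer MLPs with 64 hidden units, dropout 0.05, and full-batch Adam at a learning rate from the quoted grid. The adjacency normalization, the head wiring, and all names (`GraphConv`, `outcome_head`, etc.) are assumptions for illustration, not TNet's actual architecture.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """Single graph-convolution encoder: H = ReLU(Â X W), with Â a
    row-normalized adjacency (a common GCN variant; the paper's exact
    normalization is an assumption here)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        return torch.relu(self.lin(adj_norm @ x))

def mlp(in_dim, out_dim, hidden=64, p_drop=0.05):
    """3 fully connected layers, 64 hidden units, dropout 0.05,
    matching the hyperparameters quoted above."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
        nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
        nn.Linear(hidden, out_dim),
    )

# Hypothetical wiring: encode covariates, then predict the outcome from
# the representation plus (t, z); the heads are illustrative, not TNet's.
n, d = 100, 32
x = torch.randn(n, d)
adj = (torch.rand(n, n) < 0.05).float()
adj_norm = adj / adj.sum(dim=1, keepdim=True).clamp(min=1.0)

encoder = GraphConv(d, 64)
outcome_head = mlp(64 + 2, 1)

rep = encoder(x, adj_norm)
t = torch.randint(0, 2, (n, 1)).float()
z = adj_norm @ t                          # neighborhood exposure summary
y_hat = outcome_head(torch.cat([rep, t, z], dim=1))

# Full-batch Adam, learning rate from the quoted grid {0.001, 0.0001}.
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(outcome_head.parameters()), lr=1e-3
)
```

Full-batch training is natural here because graph convolution couples each unit's representation to its neighbors', so the whole adjacency is consumed in every forward pass.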