Counterfactual Prediction for Bundle Treatment
Authors: Hao Zou, Peng Cui, Bo Li, Zheyan Shen, Jianxin Ma, Hongxia Yang, Yue He
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we conduct extensive experiments on both synthetic datasets and real world datasets to demonstrate the advantages of our proposed variational sample re-weighting algorithm. |
| Researcher Affiliation | Collaboration | 1Tsinghua University, 2Alibaba Group |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described in this paper. |
| Open Datasets | Yes | Considering that few datasets contain ground truth of different treatment outcomes, we conduct experiments on both synthetic datasets and datasets from a simulator mimicking recommendation systems in real world to evaluate the effectiveness of our method. and There is a simulation environment about document recommendation in RecSim (https://github.com/google-research/recsim/blob/master/recsim/environments/interest_exploration.py). |
| Dataset Splits | No | The paper mentions creating an 'unbiased testing dataset' by shuffling matches of confounders and treatments but does not specify a distinct validation set or explicit train/validation/test splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running its experiments, only mentioning general 'deep neural networks'. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers (e.g., Python, PyTorch versions), needed to replicate the experiment. |
| Experiment Setup | Yes | In this experiment, we set the confounder dimension d = 10, latent dimension k = 3, the number of one-value bits in treatments s = 5, and the noise variable εy ~ N(0, 0.01²). and We fixed the sample size n = 10000, the number of document topics d = 4 and selected documents s = 4. |
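The quoted synthetic settings (d = 10, k = 3, s = 5 one-value bits, noise εy ~ N(0, 0.01²)) can be sketched as a minimal data generator. Note this is only an illustration of the stated parameters: the total number of bundle items `p`, the uniform treatment assignment, and the linear outcome model below are assumptions, not taken from the paper (whose confounded assignment mechanism and true outcome function are not reproduced in this table).

```python
import numpy as np

rng = np.random.default_rng(0)

# Settings quoted from the paper's synthetic experiment:
# confounder dimension d = 10, latent dimension k = 3,
# s = 5 one-value bits per bundle treatment, noise ~ N(0, 0.01^2).
n, d, s = 10000, 10, 5
p = 20  # total items available for a bundle (hypothetical; not stated in the paper)

x = rng.normal(size=(n, d))  # confounders

# Bundle treatments: binary vectors with exactly s ones. A uniform
# selection is used here for simplicity; in the paper the assignment
# depends on the confounders, which is what creates the bias.
t = np.zeros((n, p))
for i in range(n):
    t[i, rng.choice(p, size=s, replace=False)] = 1.0

# Hypothetical linear outcome model plus the quoted Gaussian noise.
beta_x = rng.normal(size=d)
beta_t = rng.normal(size=p)
y = x @ beta_x + t @ beta_t + rng.normal(scale=0.01, size=n)
```

An unbiased testing set, as described in the Dataset Splits row, would then be obtained by shuffling the pairing between confounders and treatments before regenerating outcomes.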