Uplift Modeling from Separate Labels
Authors: Ikko Yamane, Florian Yger, Jamal Atif, Masashi Sugiyama
NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show a mean squared error bound for the proposed estimator and demonstrate its effectiveness through experiments. ... In this section, we test the proposed method and compare it with baselines. |
| Researcher Affiliation | Academia | 1 The University of Tokyo, Chiba, Japan; 2 RIKEN Center for Advanced Intelligence Project (AIP), Tokyo, Japan; 3 LAMSADE, CNRS, Université Paris-Dauphine, Université PSL, Paris, France |
| Pseudocode | No | The paper describes the proposed method mathematically but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about open-sourcing the code for the described methodology or links to code repositories. |
| Open Datasets | Yes | Email data: This data set consists of data collected in an email advertisement campaign for promoting customers to visit a website of a store [8, 27], available at https://blog.minethatdata.com/2008/03/minethatdata-e-mail-analytics-and-data.html. Jobs data: This data set consists of randomized experimental data obtained from a job training program called the National Supported Work Demonstration [17], available at http://users.nber.org/~rdehejia/data/nswdata2.html. Criteo data: This data set consists of banner advertisement log data collected by Criteo [18], available at http://www.cs.cornell.edu/~adith/Criteo/. |
| Dataset Splits | Yes | We use 5000 and 2000 randomly sub-sampled data points for training and evaluation, respectively. (Email data) ... We use 50 randomly sub-sampled data points for training and 100 for evaluation. (Jobs data) |
| Hardware Specification | No | The paper mentions 'neural networks are fully connected ones with two hidden layers each with 10 hidden units' but provides no specific details about the hardware (e.g., CPU, GPU models, memory) used for the experiments. |
| Software Dependencies | No | The paper does not provide specific software names with version numbers for reproducibility. |
| Experiment Setup | Yes | The neural networks are fully connected ones with two hidden layers each with 10 hidden units. For the proposed method, we use the linear-in-parameter models with Gaussian basis functions centered at randomly sub-sampled training data points (see Appendix K for more details). We conduct 50 trials of each experiment with different random seeds. |
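
The "Dataset Splits" row above reports per-trial random sub-samples (e.g., 5000 training and 2000 evaluation points for the Email data) repeated over 50 trials with different seeds. Below is a minimal sketch of such a split, assuming uniform sampling without replacement and using the 64,000-record size of the Hillstrom e-mail data set; the paper does not spell out the sampling procedure, so the helper name and these details are illustrative.

```python
import numpy as np

def subsample_split(n_total, n_train, n_eval, seed):
    """Draw disjoint training/evaluation index sets by random sub-sampling.

    Assumption: uniform sampling without replacement; the paper only states
    the sub-sample sizes, not the exact sampling procedure.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_total)
    return perm[:n_train], perm[n_train:n_train + n_eval]

# Email data: 5000 training / 2000 evaluation points per trial, one split
# per seed for each of the 50 trials (64,000 records in the Hillstrom set).
splits = [subsample_split(n_total=64000, n_train=5000, n_eval=2000, seed=s)
          for s in range(50)]
```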
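The "Experiment Setup" row describes two model families: fully connected networks with two hidden layers of 10 units each, and, for the proposed method, linear-in-parameter models with Gaussian basis functions centered at randomly sub-sampled training points. The sketch below shows what such models could look like; the activation function, output dimension, number of basis centers, bandwidth, and ridge regularization are assumptions not stated in the excerpt (the paper defers these details to its Appendix K).

```python
import numpy as np
import torch.nn as nn

# Fully connected network with two hidden layers of 10 units each.
# Assumptions: ReLU activations and a single scalar output; the excerpt
# does not specify the activation function or output dimension.
def make_mlp(input_dim):
    return nn.Sequential(
        nn.Linear(input_dim, 10), nn.ReLU(),
        nn.Linear(10, 10), nn.ReLU(),
        nn.Linear(10, 1),
    )

# Linear-in-parameter model with Gaussian (RBF) basis functions centered at
# randomly sub-sampled training points: f(x) = Phi(x) @ theta.
# Assumptions: the bandwidth and regularization strength are illustrative.
def gaussian_design_matrix(X, centers, bandwidth=1.0):
    # Squared Euclidean distances between each input and each center.
    sq_dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def fit_ridge(Phi, y, lam=1e-3):
    # Regularized least squares for the linear-in-parameter coefficients.
    d = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(d), Phi.T @ y)
```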