A/B Testing in Dense Large-Scale Networks: Design and Inference
Authors: Preetam Nandy, Kinjal Basu, Shaunak Chatterjee, Ye Tu
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 5 experiments. Simulation Study: "We compare OASIS with a graph-cluster randomization method..." Real-World Experiments: "We demonstrate an application of our method on the LinkedIn newsfeed..." |
| Researcher Affiliation | Industry | Preetam Nandy, Kinjal Basu, Shaunak Chatterjee, Ye Tu; LinkedIn Corporation, Mountain View, CA 94083; {pnandy, kbasu, shchatte, ytu}@linkedin.com |
| Pseudocode | Yes | Algorithm 1 Optimal Allocation Strategy (OAS)... Algorithm 2 OASIS |
| Open Source Code | No | The paper does not state that source code for the described methodology will be released, nor does it link to a code repository. |
| Open Datasets | No | The paper uses data from the LinkedIn newsfeed for real-world experiments, which is an internal company dataset, and for simulations it generates graphs using models [7, 12] without providing access to the specific generated datasets. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, or citations to predefined splits) needed to reproduce data partitioning for training, validation, or testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions using 'the Operator Splitting method' and references the OSQP solver [24], but it does not specify concrete version numbers for this or any other software dependencies used in the experiments. |
| Experiment Setup | No | The paper describes the design of the experiment and the optimization formulation but does not provide specific hyperparameter values (e.g., learning rate, batch size, epochs) or detailed system-level training settings for its models. |
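The Software Dependencies row notes that the paper relies on the operator-splitting method behind the OSQP solver [24] without pinning versions. As an illustration of that technique only (not the paper's implementation; the function name `admm_box_qp` and all parameter values here are hypothetical), a minimal ADMM sketch for a box-constrained quadratic program might look like:

```python
import numpy as np

def admm_box_qp(P, q, lo, hi, rho=1.0, iters=200):
    """Minimal ADMM (operator-splitting) sketch for
    minimize 1/2 x'Px + q'x  subject to  lo <= x <= hi.
    OSQP implements a refined, production-grade version of this scheme."""
    n = len(q)
    x = np.zeros(n)   # primal iterate
    z = np.zeros(n)   # auxiliary copy constrained to the box
    u = np.zeros(n)   # scaled dual variable
    M = P + rho * np.eye(n)  # x-update solves (P + rho*I) x = rho*(z - u) - q
    for _ in range(iters):
        x = np.linalg.solve(M, rho * (z - u) - q)
        z = np.clip(x + u, lo, hi)  # projection onto the box constraint
        u = u + x - z               # dual ascent on the splitting residual
    return z

# Toy usage: minimize (x - 1)^2 over [0, 2]; the minimizer is x = 1.
P = np.array([[2.0]])
q = np.array([-2.0])
sol = admm_box_qp(P, q, lo=np.array([0.0]), hi=np.array([2.0]))
print(sol)
```

In a reproducibility context, the point of the row above is that even when such a solver is named, exact package versions would still be needed to rerun the experiments.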