On Online Experimentation without Device Identifiers

Authors: Shiv Shankar, Ritwik Sinha, Madalina Fiterau

ICML 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experiments show that our estimator is superior to standard estimators, with a lower bias and greater robustness to network uncertainty." Section 5 (Experiments) opens by stating the key research questions and how the experiments are designed to answer each one. |
| Researcher Affiliation | Collaboration | ¹College of Information and Computer Sciences, University of Massachusetts, USA; ²Adobe Research, USA. Correspondence to: Shiv Shankar <sshankar@umass.edu>. |
| Pseudocode | Yes | Algorithm 1: Parametric Bootstrap (a hedged sketch of a generic parametric bootstrap follows the table). |
| Open Source Code | No | The paper contains no explicit statement about releasing source code and provides no link to a code repository for the described methodology. |
| Open Datasets | Yes | Experiments use Erdős–Rényi graphs to compare the proposed estimator with other estimators, a simulation framework for the Airbnb vacation-rentals domain [50], and a public dataset on power-generation facilities in the USA used in Papadogeorgou et al. [59]. (A graph-simulation sketch follows the table.) |
| Dataset Splits | No | The paper mentions simulating data and conducting experiments but gives no training/validation/test split details (e.g., percentages or sample counts) for any of its datasets. |
| Hardware Specification | No | No hardware details (e.g., GPU/CPU models, memory) are reported for the experiments; the paper mentions "neural networks" but gives no hardware specifications. |
| Software Dependencies | No | The paper mentions "neural networks" and "MLPs" but names no software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, or other libraries). |
| Experiment Setup | No | Some architectural choices are described ("MLPs with one hidden layer and leaky ReLU activation"; a "Gaussian variational approximation with both mean and variance parameterized"), but specific numerical hyperparameters (learning rate, batch size, number of epochs, optimizer settings) and other training details are missing. (A sketch of the described architecture follows below.) |
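The paper's Algorithm 1 is a parametric bootstrap, but its exact procedure is not reproduced on this page. As a rough illustration only, a generic parametric bootstrap looks like the sketch below; the function names (`fit`, `simulate`, `estimator`) and the Gaussian usage example are hypothetical stand-ins, not the paper's components.

```python
import numpy as np

def parametric_bootstrap(data, fit, simulate, estimator, n_boot=1000, seed=0):
    """Generic parametric bootstrap (illustrative, not the paper's Algorithm 1).

    fit(data)            -> fitted model parameters theta_hat
    simulate(theta, rng) -> one synthetic dataset drawn from the fitted model
    estimator(data)      -> the statistic of interest
    """
    rng = np.random.default_rng(seed)
    theta_hat = fit(data)                      # fit the parametric model once
    point = estimator(data)                    # estimate on the observed data
    reps = np.array([estimator(simulate(theta_hat, rng))
                     for _ in range(n_boot)])  # re-estimate on synthetic draws
    se = reps.std(ddof=1)                      # bootstrap standard error
    ci = np.percentile(reps, [2.5, 97.5])      # 95% percentile interval
    return point, se, ci

# Hypothetical usage: bootstrap the mean of Gaussian data.
data = np.random.default_rng(1).normal(5.0, 2.0, size=200)
fit = lambda x: (x.mean(), x.std(ddof=1))
simulate = lambda th, rng: rng.normal(th[0], th[1], size=200)
point, se, ci = parametric_bootstrap(data, fit, simulate, np.mean)
```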
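For the Erdős–Rényi experiments, this page gives no generative details beyond the graph family. The sketch below shows one plausible way such a simulation is commonly set up (random treatment assignment plus neighbor interference on a `networkx` Erdős–Rényi graph); the outcome model and every coefficient in it are invented for illustration and are not taken from the paper.

```python
import networkx as nx
import numpy as np

def simulate_er_experiment(n=500, p=0.02, treat_frac=0.5, seed=0):
    """Simulate a randomized experiment on an Erdos-Renyi graph.

    Each unit's outcome depends on its own treatment and on the treated
    fraction of its neighbors (interference), a common setup when
    comparing network-aware estimators. All effect sizes are made up.
    """
    rng = np.random.default_rng(seed)
    g = nx.erdos_renyi_graph(n, p, seed=seed)
    z = rng.random(n) < treat_frac                 # Bernoulli treatment assignment
    y = np.empty(n)
    for i in g.nodes:
        nbrs = list(g.neighbors(i))
        exposed = z[nbrs].mean() if nbrs else 0.0  # treated neighbor fraction
        y[i] = 1.0 + 2.0 * z[i] + 1.5 * exposed + rng.normal(0, 1)
    return g, z, y

g, z, y = simulate_er_experiment()
# A naive difference-in-means ignores interference through the graph:
print(y[z].mean() - y[~z].mean())
```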
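The only architectural details quoted are one-hidden-layer MLPs with leaky-ReLU activations and a Gaussian variational approximation with both mean and variance parameterized. Below is a minimal PyTorch sketch consistent with that description; the hidden width, output size, and every other hyperparameter are chosen arbitrarily, since the paper does not report them.

```python
import torch
import torch.nn as nn

class GaussianVariationalMLP(nn.Module):
    """One-hidden-layer MLP with leaky ReLU, outputting a Gaussian
    variational approximation q(z|x) = N(mu(x), diag(sigma(x)^2)).

    Hidden width and output size are placeholders: the paper does not
    report these hyperparameters.
    """
    def __init__(self, in_dim, hidden_dim=64, out_dim=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.LeakyReLU(),
        )
        self.mean = nn.Linear(hidden_dim, out_dim)     # mu(x)
        self.log_var = nn.Linear(hidden_dim, out_dim)  # log sigma^2(x)

    def forward(self, x):
        h = self.body(x)
        mu, log_var = self.mean(h), self.log_var(h)
        std = torch.exp(0.5 * log_var)
        z = mu + std * torch.randn_like(std)           # reparameterization trick
        return z, mu, log_var

model = GaussianVariationalMLP(in_dim=10)
z, mu, log_var = model(torch.randn(32, 10))
```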