An active learning framework for multi-group mean estimation

Authors: Abdellah Aznag, Rachel Cummings, Adam N. Elmachtoub

NeurIPS 2023

Reproducibility variables, results, and LLM responses:
Research Type: Experimental. "We empirically validate our findings through numerical experiments, presented in Section 6, which show that our theoretical regret bounds match the empirical convergence rates for both finite and infinite p-norms. We also provide examples showing that for finite p-norms, the smallest variance affects the regret, even when the feedback is Gaussian. This contrasts with the infinite-norm case, where it is proven [13] that under Gaussian feedback the algorithm is not affected by the smallest variance."
Researcher Affiliation: Academia. Abdellah Aznag, Rachel Cummings, Adam N. Elmachtoub; Department of Industrial Engineering and Operations Research, Columbia University; {aa4683, rac2239, ae2516}@columbia.edu.
Pseudocode: Yes. Algorithm 1: Variance-UCB(p, T, G, c1, c2).
Open Source Code: No. The paper does not state that its code is open-sourced and provides no link to a code repository for the described methodology.
Open Datasets: No. The paper uses synthetic data generated from Gaussian distributions for its numerical studies ("In all the experiments Dg follow Gaussian distributions") and does not refer to any publicly available, named datasets.
Dataset Splits: No. The paper generates data from Gaussian distributions but specifies no training, validation, or test splits; the algorithm is evaluated directly on the simulated scenarios.
Hardware Specification: No. The paper does not specify any hardware (e.g., GPU/CPU models, memory, or cloud instances) used to run its experiments.
Software Dependencies: No. The paper does not specify any versioned software dependencies (e.g., Python, PyTorch, or specific solvers) for its experiments.
Experiment Setup: Yes. "Except where they are varied, the default parameter settings are T = 10^5, p = 2, G = 2, with the respective data distributions of groups 1 and 2 as N(1, 1) and N(2, 2.5), satisfying c1 = c2 = 5."
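The quoted default settings are concrete enough to simulate. Below is a minimal, hypothetical Python sketch that draws feedback from the two stated Gaussian groups (means 1 and 2, variances 1 and 2.5, horizon T = 10^5) and allocates queries with a generic UCB-on-variance index. The exact index, confidence widths, and the role of the cost parameters c1, c2 in the paper's Variance-UCB algorithm are not reproduced here; this only illustrates the idea of steering samples toward higher-variance groups via optimistic variance estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Default settings quoted in the paper: T = 10^5 rounds, G = 2 groups
# with feedback N(1, 1) and N(2, 2.5) (mean, variance).
T, G = 10**5, 2
means = np.array([1.0, 2.0])
stds = np.sqrt(np.array([1.0, 2.5]))

counts = np.zeros(G, dtype=int)   # samples drawn per group
sums = np.zeros(G)                # running sums
sumsq = np.zeros(G)               # running sums of squares

def pull(g):
    """Query group g once and update its sufficient statistics."""
    x = rng.normal(means[g], stds[g])
    counts[g] += 1
    sums[g] += x
    sumsq[g] += x * x

# Two initial pulls per group so a variance estimate exists.
for g in range(G):
    pull(g)
    pull(g)

for t in range(2 * G, T):
    mean_hat = sums / counts
    var_hat = np.maximum(sumsq / counts - mean_hat**2, 0.0)
    bonus = np.sqrt(np.log(T) / counts)          # hypothetical confidence width
    index = (np.sqrt(var_hat) + bonus) / np.sqrt(counts)
    pull(int(np.argmax(index)))                  # optimistic group choice

# The higher-variance group receives more samples, and both group means
# are estimated accurately at this horizon.
print(counts, sums / counts)
```

As a sanity check, the group with variance 2.5 ends up with the larger share of the T queries, consistent with variance-driven allocation.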