Robust and Heavy-Tailed Mean Estimation Made Simple, via Regret Minimization

Authors: Sam Hopkins, Jerry Li, Fred Zhang

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper, we provide a meta-problem and an equivalence theorem that lead to a new unified view on robust and heavy-tailed mean estimation in high dimensions. We show that the meta-problem can be solved either by a variant of the FILTER algorithm from the recent literature on robust estimation or by the quantum entropy scoring scheme (QUE), due to Dong, Hopkins and Li (NeurIPS '19). By leveraging our equivalence theorem, these results translate into simple and efficient algorithms for both robust and heavy-tailed settings. Furthermore, the QUE-based procedure has run-time that matches the fastest known algorithms on both fronts. Our analysis of FILTER is through the classic regret bound of the multiplicative weights update method.
Researcher Affiliation | Collaboration | UC Berkeley (email: {hopkins, z0}@berkeley.edu); Microsoft Research AI (email: jerrl@microsoft.com)
Pseudocode | Yes | Algorithm 1: Multiplicative weights for spectral sample reweighing (Definition 2.1)
Open Source Code | No | The paper does not provide any statement or link indicating that open-source code for the described methodology is available.
Open Datasets | No | This is a theoretical paper that does not conduct empirical experiments or use specific datasets for training.
Dataset Splits | No | This is a theoretical paper that does not conduct empirical experiments or specify dataset splits for validation.
Hardware Specification | No | This is a theoretical paper that does not describe any computational experiments or hardware specifications.
Software Dependencies | No | This is a theoretical paper that does not describe any computational experiments or software dependencies with version numbers.
Experiment Setup | No | This is a theoretical paper focusing on algorithms and analysis; it does not include details about an experimental setup, hyperparameters, or training settings.
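Since the paper releases no code, the multiplicative-weights approach to spectral sample reweighing named in the abstract and pseudocode rows can be illustrated with a minimal sketch. This is an assumption-laden simplification, not the paper's Algorithm 1: the function name, step count, and step size are hypothetical, and the penalty here uses only the top eigenvector of the weighted covariance rather than the paper's full scoring schemes (FILTER or QUE).

```python
import numpy as np

def mwu_sample_reweighing(X, steps=50, eta=0.1):
    """Illustrative multiplicative-weights sketch (hypothetical, not the
    paper's exact Algorithm 1): iteratively downweight samples whose
    centered projection onto the top eigenvector of the weighted
    covariance is large, then return the reweighted mean."""
    n, d = X.shape
    w = np.ones(n) / n                       # uniform initial weights
    for _ in range(steps):
        mu = w @ X                           # weighted mean
        Xc = X - mu
        C = Xc.T @ (w[:, None] * Xc)         # weighted covariance
        # top eigenvector of the weighted covariance (eigh is ascending)
        v = np.linalg.eigh(C)[1][:, -1]
        # penalty: squared projection of each centered sample onto v
        tau = (Xc @ v) ** 2
        # multiplicative update: larger penalty -> smaller weight
        w = w * np.exp(-eta * tau / tau.max())
        w = w / w.sum()                      # renormalize to a distribution
    return w @ X                             # robust mean estimate

# Usage: inliers near the origin plus a few planted outliers.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 5))
X[:10] += 50.0                               # 5% adversarial outliers
est = mwu_sample_reweighing(X)
```

In this toy run the naive mean is pulled far from the origin by the outliers, while the reweighted estimate stays close, because each round the outliers carry the largest spectral penalty and their weights shrink geometrically.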