Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Improved and Oracle-Efficient Online $\ell_1$-Multicalibration

Authors: Rohan Ghuge, Vidya Muthukumar, Sahil Singla

ICML 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We study online multicalibration, a framework for ensuring calibrated predictions across multiple groups in adversarial settings, across T rounds. ... Our key insight is a novel reduction of online ℓ1-multicalibration to an online learning problem with product-based rewards, which we refer to as online linear-product optimization (OLPO). To obtain the improved rate of Õ(T^{1/3}), we introduce a linearization of OLPO and design a no-regret algorithm for this linearized problem. ... Our framework also extends to certain infinite families of groups (e.g., all linear functions on the context space) by exploiting a 1-Lipschitz property of the ℓ1-multicalibration error with respect to H. ... The paper focuses on theoretical bounds, algorithm design (e.g., no-regret algorithms), proofs, and mathematical reductions without presenting empirical evaluations on specific datasets.
Researcher Affiliation | Academia | Rohan Ghuge 1, Vidya Muthukumar 2, Sahil Singla 3. 1 H. Milton Stewart School of Industrial and Systems Engineering / Algorithms and Randomness Center, Georgia Institute of Technology, Atlanta, USA. 2 School of Electrical and Computer Engineering / H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, USA. 3 School of Computer Science, Georgia Institute of Technology, Atlanta, GA, USA. Correspondence to: Rohan Ghuge <EMAIL>.
Pseudocode | Yes | Algorithm 1: ONLINE ℓ1-MULTICALIBRATION; Algorithm 2: LINEARIZED ONLINE LINEAR-PRODUCT OPTIMIZATION; Algorithm 3: Generalized FTPL for OLPO; Algorithm 4: O(x, h, θ).
Open Source Code | No | The paper does not contain any explicit statements about releasing source code, nor does it provide links to code repositories or supplementary materials containing code.
Open Datasets | No | The paper is theoretical, focusing on algorithm design and mathematical bounds. It does not perform experiments using specific datasets, and therefore no information about open datasets is provided.
Dataset Splits | No | The paper is theoretical and does not conduct experiments on datasets. Therefore, there is no mention of dataset splits for training, validation, or testing.
Hardware Specification | No | The paper focuses on theoretical contributions and algorithm design, not empirical evaluations. Consequently, there is no description of hardware used for running experiments.
Software Dependencies | No | The paper describes algorithms (e.g., online gradient descent, multiplicative weights update) but does not specify any software dependencies with version numbers that would be required to implement or reproduce the algorithms.
Experiment Setup | No | The paper is purely theoretical, focusing on algorithm design and proving theoretical bounds. It does not describe any experimental setups, hyperparameters, or training configurations for empirical evaluations.
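The Software Dependencies row above notes that the paper builds on standard no-regret procedures such as online gradient descent and the multiplicative weights update. As a hedged illustration only (this is the textbook multiplicative-weights/Hedge algorithm, not a reconstruction of the paper's Algorithms 1-4, and the loss model and constants below are illustrative assumptions), the sketch shows the no-regret property: the algorithm's average loss approaches that of the best fixed expert as T grows.

```python
import math
import random

def multiplicative_weights(losses, eta):
    """Run the multiplicative-weights (Hedge) update over a loss sequence.

    losses: list of per-round loss vectors (one entry per expert, values in [0, 1]).
    eta:    learning rate.
    Returns (expected algorithm loss, loss of the best fixed expert).
    """
    n = len(losses[0])
    weights = [1.0] * n
    alg_loss = 0.0
    cum = [0.0] * n
    for loss in losses:
        total = sum(weights)
        probs = [w / total for w in weights]
        # Expected loss of the randomized algorithm this round.
        alg_loss += sum(p * l for p, l in zip(probs, loss))
        # Exponentially down-weight poorly performing experts.
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, loss)]
        cum = [c + l for c, l in zip(cum, loss)]
    return alg_loss, min(cum)

# Illustrative setup: expert 0 is better on average; the rest are uniform noise.
random.seed(0)
T, n = 2000, 8
losses = [[random.random() * (0.5 if i == 0 else 1.0) for i in range(n)]
          for _ in range(T)]
eta = math.sqrt(2 * math.log(n) / T)  # standard tuning for horizon T
alg, best = multiplicative_weights(losses, eta)
regret = alg - best
# Average regret shrinks on the order of sqrt(log(n) / T).
print(round(regret / T, 4))
```

The vanishing average regret here is the sense in which "no-regret" is used in the table; the paper's contribution is a reduction that lets such guarantees drive the improved Õ(T^{1/3}) ℓ1-multicalibration rate.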