Oracle Efficient Algorithms for Groupwise Regret

Authors: Krishna Acharya, Eshwar Ram Arunachaleswaran, Sampath Kannan, Aaron Roth, Juba Ziani

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Beyond providing theoretical regret bounds, we evaluate this algorithm with an extensive set of experiments on synthetic data and on two real data sets, Medical costs and the Adult income dataset, both instantiated with intersecting groups defined in terms of race, sex, and other demographic characteristics. We find that, uniformly across groups, our algorithm gives substantial error improvements compared to running a standard online linear regression algorithm with no groupwise regret guarantees."
Researcher Affiliation | Academia | Krishna Acharya (1), Eshwar Ram Arunachaleswaran (2), Sampath Kannan (2), Aaron Roth (2), Juba Ziani (1); (1) Georgia Institute of Technology, (2) University of Pennsylvania
Pseudocode | Yes | Algorithm 1: Algorithm for Subsequence Regret Minimization
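The paper's Algorithm 1 itself is not reproduced in this report. As a rough, hedged illustration of what subsequence (group-wise) regret minimization involves, the sketch below runs one simple online least-squares learner per group and aggregates the predictions of the groups active at each round with multiplicative weights, in a sleeping-experts style. The function name, the aggregation rule, and the per-group learner are assumptions for illustration only, not the paper's algorithm.

```python
import numpy as np

def groupwise_online_prediction(X, y, group_masks, eta=0.1):
    """Sketch of subsequence-regret-style aggregation (NOT the paper's
    Algorithm 1): per-group online ridge learners, combined by
    multiplicative weights over the groups active in each round."""
    T, d = X.shape
    G = len(group_masks)
    A = [np.eye(d) for _ in range(G)]   # regularized Gram matrix per group
    b = [np.zeros(d) for _ in range(G)] # response accumulator per group
    w = np.ones(G)                      # expert (group) weights
    preds = np.zeros(T)
    for t in range(T):
        active = [g for g in range(G) if group_masks[g][t]]
        # each active group's current ridge estimate and prediction
        ys = {g: float(X[t] @ np.linalg.solve(A[g], b[g])) for g in active}
        wa = np.array([w[g] for g in active])
        preds[t] = float(np.dot(wa, [ys[g] for g in active]) / wa.sum())
        for g in active:
            loss = (ys[g] - y[t]) ** 2
            w[g] *= np.exp(-eta * loss)  # multiplicative-weights update
            A[g] += np.outer(X[t], X[t]) # ridge update on the group's rounds
            b[g] += y[t] * X[t]
    return preds
```

The sketch assumes every round belongs to at least one group (e.g., an "all data" group), so the active set is never empty.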
Open Source Code | Yes | The code is available at https://github.com/krishnacharya/multidecomp.
Open Datasets | Yes | "Beyond providing theoretical regret bounds, we evaluate this algorithm with an extensive set of experiments on synthetic data and on two real data sets, Medical costs and the Adult income dataset, both instantiated with intersecting groups defined in terms of race, sex, and other demographic characteristics." ... "We provide additional experiments on the census-based Adult income dataset (Ding et al., 2021; Flood et al., 2020) in Appendix D.5." ... "The Medical Cost dataset (Lantz, 2013) looks at an individual medical cost prediction task."
Dataset Splits | No | The paper does not provide explicit training, validation, and test dataset splits in the traditional machine learning sense; the experiments are conducted in an online learning setting.
Hardware Specification | No | The paper does not specify the hardware used to run the experiments (e.g., GPU/CPU models, memory).
Software Dependencies | No | The paper mentions that "Online ridge regression and Ada Normal Hedge are implemented in ORidge.py and Anh.py respectively," but does not specify version numbers for Python or any libraries used in the implementation.
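For context on the first of those two components, here is a minimal sketch of online ridge regression: maintain A = lam*I + sum x x^T and b = sum y x, and predict with theta = A^{-1} b before each update. The class name and interface are assumptions for illustration; the repo's ORidge.py may be structured differently.

```python
import numpy as np

class OnlineRidge:
    """Minimal online ridge regression sketch (assumed interface; not
    necessarily the repo's ORidge.py). Keeps sufficient statistics and
    solves for the ridge estimate at prediction time."""
    def __init__(self, dim, lam=1.0):
        self.A = lam * np.eye(dim)  # lam*I + sum of outer products x x^T
        self.b = np.zeros(dim)      # sum of y * x

    def predict(self, x):
        theta = np.linalg.solve(self.A, self.b)  # ridge estimate
        return float(x @ theta)

    def update(self, x, y):
        self.A += np.outer(x, x)
        self.b += y * x
```

On a stream, one would call `predict` on each arriving feature vector and then `update` once the true label is revealed, which is the standard online-regression protocol the experiments in the paper follow.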
Experiment Setup | No | The paper describes the learning task, loss functions, and how groups are defined, but it does not specify concrete hyperparameters such as learning rates, batch sizes, or optimizer settings for the online regression algorithms used in the experiments.