Simple and near-optimal algorithms for hidden stratification and multi-group learning
Authors: Christopher J. Tosh, Daniel Hsu
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | This paper studies the structure of solutions to the multi-group learning problem and provides simple, near-optimal algorithms for it. All proofs are presented in the appendix. |
| Researcher Affiliation | Collaboration | Memorial Sloan Kettering Cancer Center, New York, NY; Department of Computer Science, Columbia University, New York, NY. |
| Pseudocode | Yes | Algorithm 1 PREPEND, Algorithm 2 Reduction to sleeping experts, Algorithm 3 Consistent majority algorithm, Algorithm 4 MLC-HEDGE in the multi-group setting (a hedged sketch of PREPEND follows the table). |
| Open Source Code | No | The paper does not provide any statement about releasing source code for the described methodology or links to a code repository. |
| Open Datasets | No | The paper is theoretical and discusses 'n i.i.d. training examples drawn from a distribution D' but does not specify or provide access information for any public dataset. |
| Dataset Splits | No | The paper is theoretical and does not discuss specific dataset splits (training, validation, test) needed for reproducibility. |
| Hardware Specification | No | The paper focuses on theoretical algorithms and proofs, and therefore does not mention any hardware specifications used for experiments. |
| Software Dependencies | No | The paper is theoretical and does not specify any software dependencies with version numbers. |
| Experiment Setup | No | The paper describes algorithms theoretically but does not provide specific experimental setup details like hyperparameter values or training configurations. |
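Since the paper provides no code, the following is a minimal Python sketch of the PREPEND idea (Algorithm 1), not the authors' reference implementation. It assumes: `groups` is a list of `(name, membership_fn)` pairs with `membership_fn(x) -> bool`; `erm(X, y)` is a hypothetical helper that fits a predictor by empirical risk minimization over some hypothesis class and returns a callable; losses are 0-1; and the tolerance `eps` stands in for the paper's slack parameter.

```python
# Illustrative sketch of PREPEND under the assumptions stated above.
# Hypothetical helpers: `groups`, `erm`, `eps` are not from the paper's text.

import numpy as np


def group_loss(predict, X, y, members):
    """Average 0-1 loss of `predict` restricted to rows where `members` is True."""
    if not members.any():
        return 0.0
    preds = np.array([predict(x) for x in X[members]])
    return float(np.mean(preds != y[members]))


def prepend(X, y, groups, erm, eps=0.01, max_rounds=50):
    """Build a decision list: prepend a group-specific predictor whenever it
    beats the current list on that group by more than `eps`."""
    base = erm(X, y)                      # overall ERM is the fallback predictor
    decision_list = []                    # entries are (membership_fn, predictor)

    def predict(x):
        for member, h in decision_list:  # first matching group wins
            if member(x):
                return h(x)
        return base(x)

    for _ in range(max_rounds):
        improved = False
        for _, member in groups:
            mask = np.array([member(x) for x in X])
            if not mask.any():
                continue
            h_g = erm(X[mask], y[mask])  # ERM restricted to the group
            if group_loss(h_g, X, y, mask) < group_loss(predict, X, y, mask) - eps:
                decision_list.insert(0, (member, h_g))  # prepend the new rule
                improved = True
                break                    # rescan all groups with the updated list
        if not improved:
            break                        # no group improves by more than eps: done
    return predict
```

The decision-list structure mirrors the paper's main structural observation: multi-group guarantees can be met by a simple list of group-conditional predictors layered over a base ERM, with each prepended rule handling one group on which the current list underperforms.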