Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Derandomizing Multi-Distribution Learning
Authors: Kasper Green Larsen, Omar Montasser, Nikita Zhivotovskiy
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | The paper is theoretical, and we have no experiments, data or code in the paper. |
| Researcher Affiliation | Academia | Kasper Green Larsen, Department of Computer Science, Aarhus University; Omar Montasser, Department of Statistics and Data Science, Yale University; Nikita Zhivotovskiy, Department of Statistics, University of California, Berkeley |
| Pseudocode | Yes | Algorithm 1: DETERMINISTICLEARNER(P, ε, δ, A) |
| Open Source Code | No | The paper is theoretical, and we have no experiments, data or code in the paper. |
| Open Datasets | No | The paper is theoretical, and we have no experiments, data or code in the paper. |
| Dataset Splits | No | The paper is theoretical, and we have no experiments, data or code in the paper. |
| Hardware Specification | No | The paper is theoretical, and we have no experiments, data or code in the paper. |
| Software Dependencies | No | The paper is theoretical, and we have no experiments, data or code in the paper. |
| Experiment Setup | No | The paper is theoretical, and we have no experiments, data or code in the paper. |
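The only artifact the pipeline detected is the pseudocode signature DETERMINISTICLEARNER(P, ε, δ, A). As an illustration of what that interface could look like, the sketch below derandomizes a randomized base learner A by enumerating a fixed set of seeds and keeping the hypothesis with the smallest worst-case empirical error across the distributions in P. This is a generic hedged sketch, not the paper's actual procedure: the seed-enumeration strategy, the `num_seeds` budget, and the toy threshold learner are all illustrative assumptions.

```python
import random

def deterministic_learner(samples_per_dist, eps, delta, A, num_seeds=10):
    """Derandomization sketch: enumerate fixed seeds for the randomized
    learner A and keep the hypothesis whose worst-case empirical error
    over all distributions is smallest. eps and delta mirror Algorithm 1's
    signature; a full treatment would use them to size the sample and
    seed budgets (here they are carried through unused)."""
    best_h, best_err = None, float("inf")
    for seed in range(num_seeds):  # deterministic enumeration, no coin flips
        h = A(samples_per_dist, seed)
        # worst-case (max over distributions) empirical 0-1 error
        err = max(
            sum(h(x) != y for x, y in S) / len(S)
            for S in samples_per_dist
        )
        if err < best_err:
            best_h, best_err = h, err
    return best_h, best_err

# Toy randomized base learner (hypothetical, for illustration only):
# a threshold classifier whose threshold is drawn from the seed.
def A(samples_per_dist, seed):
    t = random.Random(seed).uniform(-1.0, 1.0)
    return lambda x: x > t

P = [  # labeled samples from two distributions, with labels y = (x > 0)
    [(-1.0, False), (-0.5, False), (0.5, True), (1.0, True)],
    [(-0.2, False), (0.2, True)],
]
h, err = deterministic_learner(P, eps=0.1, delta=0.05, A=A)
```

For any threshold in (-1, 1), the worst-case error on this toy data is at most 0.5, so the returned hypothesis is at least that good; the point of the sketch is only that the wrapper itself makes no random choices.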