Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Learning-Assisted Algorithm Unrolling for Online Optimization with Budget Constraints
Authors: Jianyi Yang, Shaolei Ren
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Finally, to validate LAAU, we present numerical results by considering online resource allocation for maximizing the weighted fairness metric. Our results highlight that LAAU can significantly outperform the existing baselines and is very close to the optimal oracle in terms of the fairness utility." … "Weighted fairness is a classic performance metric in the resource allocation literature (Lan et al. 2010), including fair allocation in computer systems (Ghodsi et al. 2011) and economics (Hylland and Zeckhauser 1979)." |
| Researcher Affiliation | Academia | Jianyi Yang and Shaolei Ren, University of California, Riverside |
| Pseudocode | Yes | Algorithm 1: Online Inference Procedure of LAAU |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-sourcing of the described methodology's code. |
| Open Datasets | Yes | We create the training and testing samples based on the Azure cloud workload dataset, which contains the average CPU reading for tasks at each step (Shahrad et al. 2020). |
| Dataset Splits | No | The paper mentions "training and testing samples" but does not specify split percentages, sample counts, or a splitting methodology sufficient for reproduction, and makes no mention of a validation set. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as CPU or GPU models, memory specifications, or cloud instance types used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies, libraries, or solvers used in the experiments. |
| Experiment Setup | No | The paper does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or explicit training configurations for the models. |
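The Research Type excerpt refers to a "weighted fairness metric" in the sense of Lan et al. (2010). As context only, a common form of this family is the weighted α-fair utility; the sketch below is a generic illustration under that assumption, and the function name and parameterization are not taken from the paper.

```python
import math

def weighted_alpha_fairness(x, w, alpha):
    """Weighted alpha-fair utility over allocations x with weights w.

    For alpha != 1: sum_i w_i * x_i^(1-alpha) / (1-alpha).
    For alpha == 1: the limit is the weighted log utility
    (proportional fairness). This is a generic textbook form,
    not necessarily the exact metric used in the paper.
    """
    if alpha == 1.0:
        return sum(wi * math.log(xi) for wi, xi in zip(w, x))
    return sum(wi * xi ** (1.0 - alpha) / (1.0 - alpha) for wi, xi in zip(w, x))

# alpha = 0 reduces to the weighted sum of allocations (pure throughput);
# larger alpha trades total utility for more even allocations.
u = weighted_alpha_fairness([2.0, 3.0], [1.0, 1.0], alpha=0.0)  # 5.0
```

Varying α interpolates between throughput maximization (α = 0), proportional fairness (α = 1), and max-min fairness (α → ∞).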