Learning with Adaptive Resource Allocation
Authors: Jing Wang, Miao Yu, Peng Zhao, Zhi-Hua Zhou
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate the empirical performance of our proposed LARA approach. We begin with an experiment involving a pure task bundle, where five different models are trained concurrently on the same dataset to demonstrate our approach's efficiency and effectiveness. Next, we conduct an experiment with a mixed task bundle, where different models are trained concurrently on different datasets. |
| Researcher Affiliation | Academia | 1National Key Laboratory for Novel Software Technology, Nanjing University, China 2School of Artificial Intelligence, Nanjing University, China. Correspondence to: Zhi-Hua Zhou <zhouzh@lamda.nju.edu.cn>. |
| Pseudocode | Yes | Algorithm 1 Adaptive Binary Tree Search. Algorithm 2 Learning with Adaptive Resource Allocation. (A hedged sketch of the generic binary-search idea appears below the table.) |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | We focus on image classification of CIFAR-10 dataset (Krizhevsky et al., 2009). ... computer vision (CV) with CIFAR-10, natural language processing (NLP) with IMDB (Maas et al., 2011), reinforcement learning (RL) with Montezuma's Revenge, and audio processing with Yesno. ... image classification of MNIST dataset (Deng, 2012). |
| Dataset Splits | No | The paper mentions training data and observing loss values, but does not provide specific percentages or counts for training, validation, and test dataset splits. |
| Hardware Specification | No | The paper states, 'Moreover, all model training tasks are executed on the GPU, while prediction and allocation tasks are handled on the CPU,' but does not provide specific GPU or CPU models, memory details, or other hardware specifications. |
| Software Dependencies | No | The paper mentions implementation details for various models used (e.g., Vision Transformer), but does not provide specific version numbers for software dependencies like Python, PyTorch, or other libraries. |
| Experiment Setup | Yes | The specific settings for each of the five models, including model type, data budget (Nk), deadline time (dk), and success threshold (ϵk), are outlined in Table 1. For exploration, we set all exploration thresholds Hk = 8000. For the WLS, we set the discount factor γ to 0.9. (A hedged sketch of such a discounted WLS fit follows the table.) |
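
The setup row mentions a weighted least squares (WLS) loss predictor with a discount factor γ = 0.9. Below is a minimal sketch of one way a discounted WLS fit of an observed loss curve could look, assuming a polynomial trend model and exponential down-weighting of older observations; the name `discounted_wls_fit` and these modeling choices are illustrative assumptions, not the paper's actual predictor.

```python
import numpy as np

def discounted_wls_fit(steps, losses, gamma=0.9, degree=1):
    """Fit a polynomial trend to an observed loss curve, discounting
    older observations by gamma (the newest point gets weight 1).

    Illustrative assumption: the paper only states that WLS with a
    discount factor of 0.9 is used, not the trend model fitted here.
    """
    steps = np.asarray(steps, dtype=float)
    losses = np.asarray(losses, dtype=float)
    n = len(losses)
    # Effective weight gamma**k on the squared residual of the k-th
    # most recent point; np.polyfit applies weights to the unsquared
    # residuals, hence the square root.
    weights = np.sqrt(gamma ** np.arange(n - 1, -1, -1))
    coeffs = np.polyfit(steps, losses, deg=degree, w=weights)
    return np.poly1d(coeffs)

if __name__ == "__main__":
    steps = np.arange(1, 11)
    losses = 1.0 / steps + 0.05          # toy decaying loss curve
    trend = discounted_wls_fit(steps, losses, gamma=0.9)
    print(trend(20))                      # extrapolated loss at step 20
```

A linear trend is used purely for brevity; any parametric loss-curve model could be substituted in the same weighted fit.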
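
The pseudocode row names Algorithm 1, "Adaptive Binary Tree Search", whose internals are not quoted in this report. The sketch below therefore shows only the generic binary-search-over-budgets idea the name suggests: locating the smallest resource budget whose predicted loss meets a success threshold. The predicate `meets_threshold` and its assumed monotonicity in the budget are hypothetical, not the paper's algorithm.

```python
def min_budget_binary_search(meets_threshold, lo, hi):
    """Return the smallest budget in [lo, hi] satisfying a monotone
    predicate (callers should verify the result in case no budget
    in the range actually suffices).

    Generic illustration only -- not the paper's Algorithm 1.
    `meets_threshold(b)` is a hypothetical predicate such as
    "predicted loss after spending budget b is <= eps_k".
    """
    while lo < hi:
        mid = (lo + hi) // 2
        if meets_threshold(mid):
            hi = mid        # budget mid suffices; try to shrink it
        else:
            lo = mid + 1    # budget mid fails; more resources needed
    return lo

if __name__ == "__main__":
    # Toy predicted-loss model: loss decays as 1/(10 + b) + 0.05.
    def predicted_loss(budget):
        return 1.0 / (10 + budget) + 0.05

    # Smallest extra budget predicted to reach a (made-up) 0.10 target.
    print(min_budget_binary_search(lambda b: predicted_loss(b) <= 0.10, 1, 1000))
```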