Inductive-bias-driven Reinforcement Learning For Efficient Schedules in Heterogeneous Clusters
Authors: Subho Banerjee, Saurabh Jha, Zbigniew Kalbarczyk, Ravishankar Iyer
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluated Symphony along the following dimensions. (i) How well does Symphony perform compared to the state of the art? (ii) How does Symphony's runtime affect scheduling decisions? (iii) What are the savings in training time compared to traditional methods? The evaluation testbed consisted of a rack-scale cluster of twelve IBM Power8 CPUs, two NVIDIA K40, six K80 GPUs, and two FPGAs. We illustrated the generality of the techniques on a variety of real-world workloads that used CPUs, GPUs, and FPGAs... |
| Researcher Affiliation | Academia | 1University of Illinois at Urbana-Champaign, USA. Correspondence to: Subho S. Banerjee <ssbaner2@illinois.edu>. |
| Pseudocode | No | The paper describes procedures and methods but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper cites 'OpenAI Baselines' with a GitHub link (Dhariwal et al., 2017) as an external tool used, but does not provide a link or explicit statement about releasing the source code for the Symphony framework itself. |
| Open Datasets | Yes | We evaluated the generality of techniques on a variety of real-world workloads that used CPUs, GPUs, and FPGAs: (i) variant calling and genotyping analysis (Van der Auwera et al., 2013) on human genome datasets using tools presented in Banerjee et al. (2016; 2017; 2019a); Li & Durbin (2009; 2010); Langmead et al. (2009); McKenna et al. (2010); Nothaft et al. (2015); Nothaft (2015); Rimmer et al. (2014); Zaharia et al. (2011); (ii) epilepsy detection and localization (Varatharajah et al., 2017) on intra-cranial electroencephalography data; and (iii) online security analytics (Cao et al., 2015) for intrusion detection systems. |
| Dataset Splits | No | The paper does not explicitly provide training, validation, or test dataset splits with specific percentages or counts for reproducibility of the experiments. |
| Hardware Specification | Yes | The evaluation testbed consisted of a rack-scale cluster of twelve IBM Power8 CPUs, two NVIDIA K40, six K80 GPUs, and two FPGAs. |
| Software Dependencies | No | The paper mentions general software components such as 'OpenAI Baselines', an 'RNN', and an 'LSTM layer', but does not specify version numbers for these or any other software dependencies. |
| Experiment Setup | No | The paper states that 'Implementation details of the BN and NN models are presented in the supplementary material' and does not provide specific hyperparameter values or detailed training configurations within the main text. |