Transfer Learning-Based Co-Run Scheduling for Heterogeneous Datacenters
Authors: Wei Kuang, Laura Brown, Zhenlin Wang
AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The performance degradation due to co-run interference can be predicted with relatively low error. The approach is tested with the SPEC CPU2006 benchmarks as the training set, and Parsec 3.0 and CloudSuite 2.0 as the test set, with a Core2 Duo as the base hardware and an Intel i5 as the new machine. The co-run performance degradation predicted by the cross-architecture functions for sensitivity curves and pressure has a mean relative error below 2%. |
| Researcher Affiliation | Academia | Wei Kuang, Laura E. Brown and Zhenlin Wang Department of Computer Science, Michigan Technological University, Houghton, MI 49931 {wkuang,lebrown,zlwang}@mtu.edu |
| Pseudocode | No | The paper does not include any structured pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | The benchmarks used are publicly available suites: "In a pending submission, we collect the sensitivity curves of several sets of benchmarks including SPEC CPU2006 integer and floating point programs and a subset of Parsec 3.0 and Cloud Suite 2.0 programs." |
| Dataset Splits | Yes | "We test this approach with SPEC CPU2006 benchmarks as the training set, and Parsec 3.0 and Cloud Suite 2.0 as the test set with a Core2 Duo as the base hardware and Intel i5 as the new machine." |
| Hardware Specification | Yes | The base and target machines are named: "We test this approach with SPEC CPU2006 benchmarks as the training set, and Parsec 3.0 and Cloud Suite 2.0 as the test set with a Core2 Duo as the base hardware and Intel i5 as the new machine." |
| Software Dependencies | No | The paper mentions software such as the Xen virtual machine hypervisor, and modeling components such as the logistic function, but does not provide version numbers for these or any other software dependencies. |
| Experiment Setup | No | The paper describes the general experimental process and the benchmarks used, but does not report specific configuration details, such as hyperparameter values for the regression models or detailed training settings, that would be needed for direct reproduction. |