AutoOS: Make Your OS More Powerful by Exploiting Large Language Models

Authors: Huilai Chen, Yuanbo Wen, Limin Cheng, Shouxu Kuang, Yumeng Liu, Weijia Li, Ling Li, Rui Zhang, Xinkai Song, Wei Li, Qi Guo, Yunji Chen

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 4. Evaluation; 4.1. Experimental Setup; 4.2. Overall Performance; 4.3. Ablation Study. "Experimental results show that AutoOS can automatically customize and optimize the OS kernel configurations without human effort. More importantly, AutoOS even achieves up to 25% better performance than the vendor-provided configuration."
Researcher Affiliation | Academia | (1) State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; (2) University of Chinese Academy of Sciences, Beijing, China; (3) Intelligent Software Research Center, Institute of Software, CAS, Beijing, China.
Pseudocode | Yes | Algorithm 1: Pseudocode for Dynamic Tree Traversal with Certain Randomness in AutoOS.
Open Source Code | No | The paper does not provide a direct link to the source code for the methodology, nor an explicit statement about its open-source availability.
Open Datasets | Yes | "OS. The OSes we used include three distinct Linux distributions, i.e., PolyOS (PolyOS, 2023), Fedora, and Ubuntu. Specifically, PolyOS is a lightweight OS designed for the RISC-V architecture. It contains 15,365 OS configuration options with Linux kernel version 5.17.2. Fedora (Fedoraproject, 2023), distinguished for its extensibility and robust community support, is implemented with Linux kernel version 5.18.8, comprising approximately 15,561 options."
Dataset Splits | No | The paper describes optimization search trials for OS kernel configurations but does not define traditional training, validation, and test dataset splits in the context of machine learning model training.
Hardware Specification | Yes | "Testbed. The experiment was conducted on two different hardware platforms: an AIoT device and a PC machine. The former is a HiFive embedded development board powered by the SiFive Freedom U740 (FU740), an SoC that includes a high-performance, multi-core, 64-bit, dual-issue, superscalar RISC-V processor, with 16GB of DDR4. The latter is a PC based on the Intel(R) Core(TM) i7-13700F, an x86_64 architecture processor, with 15GB of DDR5 and 30GB of swap memory."
Software Dependencies | No | "We directly integrated the publicly available GPT-3.5-Turbo into AutoOS." "We observed that the kconfiglib library (ulfalizer, 2023) provides a command-line form of interaction similar to the interface." No specific version numbers for these or other software dependencies are provided.
Experiment Setup | Yes | "Model. In this experiment, we directly integrated the publicly available GPT-3.5-Turbo into AutoOS. To ensure a degree of randomness, the temperature setting for the LLM was set to 1.0." "Search setting. In order to diversify the configuration options explored during each random traversal of the dynamic tree, AutoOS automatically explores different optimization targets from UnixBench, including integer or floating-point operations, execl throughput, file transfers, context switching on pipes, process creation throughput, and system call capabilities, or directly increases the total score of UnixBench. We let AutoOS run 24 search trials producing optimized OS kernel configurations for the different OS distributions, and then report the best total score among them. Each search trial is independent and begins from the default configuration."
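The paper's Algorithm 1 (dynamic tree traversal with certain randomness) is not reproduced in this summary, but the general idea of randomly descending a kernel-configuration menu tree can be sketched as follows. The toy tree, the option names, and the single `explore_prob` parameter are illustrative assumptions, not the paper's actual algorithm:

```python
import random

# Toy menu tree: each node is a menu or config option (hypothetical names);
# a real Linux kernel tree has on the order of 15,000 options.
TREE = {
    "General setup": ["CONFIG_SWAP", "CONFIG_SYSVIPC"],
    "CONFIG_SWAP": [],
    "CONFIG_SYSVIPC": [],
    "Processor type": ["CONFIG_SMP"],
    "CONFIG_SMP": [],
}
ROOT_MENUS = ["General setup", "Processor type"]

def traverse(menus, explore_prob=0.5, rng=None):
    """Depth-first traversal that descends into each child node
    with probability `explore_prob`, collecting the visited nodes."""
    rng = rng or random.Random(0)
    visited = []
    stack = list(menus)
    while stack:
        node = stack.pop()
        visited.append(node)
        for child in TREE.get(node, []):
            if rng.random() < explore_prob:  # randomness in the traversal
                stack.append(child)
    return visited
```

With `explore_prob=1.0` the traversal visits every option; lowering it makes each trial explore a different random subset of the menu tree, which is the kind of per-trial diversity the search setting above relies on.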
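Similarly, the search protocol described in the experiment setup (24 independent trials, each starting from the default configuration, with the best total score reported) can be sketched as below. The `mutate_config` and `benchmark_score` stubs are hypothetical stand-ins for the LLM-guided traversal and the UnixBench run, not the paper's implementation:

```python
import random

# Hypothetical default kernel configuration (a real one has ~15,000 options).
DEFAULT_CONFIG = {"CONFIG_SWAP": "y", "CONFIG_SMP": "y", "CONFIG_DEBUG_KERNEL": "y"}

def mutate_config(config, rng):
    """Stand-in for one LLM-guided traversal: flip one random option."""
    new = dict(config)
    key = rng.choice(sorted(new))
    new[key] = "n" if new[key] == "y" else "y"
    return new

def benchmark_score(config):
    """Stand-in for a UnixBench total score (made-up scoring rule:
    disabling debug code helps, disabling SMP hurts)."""
    score = 1000
    if config["CONFIG_DEBUG_KERNEL"] == "n":
        score += 200
    if config["CONFIG_SMP"] == "n":
        score -= 300
    return score

def search(trials=24, seed=0):
    """Run independent trials, each starting from the default
    configuration, and report the best score and configuration found."""
    best_score, best_cfg = benchmark_score(DEFAULT_CONFIG), DEFAULT_CONFIG
    for t in range(trials):
        rng = random.Random(seed + t)  # each trial is independent
        cfg = mutate_config(DEFAULT_CONFIG, rng)
        score = benchmark_score(cfg)
        if score > best_score:
            best_score, best_cfg = score, cfg
    return best_score, best_cfg

best_score, best_cfg = search()
```

Because every trial restarts from `DEFAULT_CONFIG` rather than from the previous trial's result, the trials are independent, matching the protocol quoted above.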