Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Auto-reconfiguration for Latency Minimization in CPU-based DNN Serving

Authors: Ankit Bhardwaj, Amar Phanishayee, Deepak Narayanan, Ryan Stutsman

ICML 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Averaged across a range of batch sizes, Packrat improves inference latency by 1.43× to 1.83× on a range of commonly used DNNs. Section 5, titled "Evaluation," details experiments including "inference microbenchmarks with PyTorch and end-to-end performance with TorchServe."
Researcher Affiliation Collaboration ¹Work done while authors were at Microsoft Research, ²MIT CSAIL, ³Meta, ⁴NVIDIA, ⁵University of Utah. Correspondence to: Ankit Bhardwaj <EMAIL>. The authors are affiliated with academic institutions (MIT CSAIL, University of Utah) and industry companies (Meta, NVIDIA, Microsoft Research), indicating a collaborative effort.
Pseudocode No The paper describes algorithms using text and mathematical equations, such as in Section 3.3 for the optimizer, but does not present a distinct, structured pseudocode or algorithm block.
Open Source Code Yes Packrat code is open-source and can be accessed at https://github.com/msr-fiddle/packrat.
Open Datasets No The paper mentions evaluating on various PyTorch models (ResNet-50, Inception-v3, GPT-2, and BERT) using "pretrained models from the model zoo" in Section 5.1. However, it does not specify the particular datasets used for inference during evaluation or provide concrete access information (links, DOIs, or citations) for those datasets.
Dataset Splits No The paper focuses on optimizing DNN inference serving and uses varying batch sizes for evaluation. It does not discuss dataset splits for training or testing, as this is not relevant to its scope of inference optimization.
Hardware Specification Yes Table 1: Server configuration for all our experiments. CPU: 2× 16-core Intel Xeon Gold 6142 at 2.6 GHz; RAM: 384 GB (6× 32 GB DDR4-2666 DIMMs per socket); OS: Ubuntu 20.04 LTS, Linux 5.4.0-100-generic.
Software Dependencies Yes Table 1: Software: Python 3.8.10, PyTorch 1.12.1, TorchServe 0.6.1, Intel MKL-DNN v2.6.0, OpenMP 4.5.
Experiment Setup Yes Section 5.1 "Microbenchmarks" states: "The fat instance is run with 16 threads and batch size B, and the thin instances use the (i, t, b) configuration suggested by Packrat's optimizer, where (T, B) is partitioned across i smaller instances such that Σⱼ tⱼ = T and Σⱼ bⱼ = B." Section 3.2 "Profiling" describes the profiling strategy: "In practice, we use (t, b) ∈ {1, …, T} × {2⁰, 2¹, …, 2ⁿ}." Figure 6 illustrates a configuration change timeline with specific batch sizes (B=8 to B=64) and elapsed time.
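The quoted setup involves two pieces: a profiling grid over thread counts and power-of-two batch sizes, and a partitioning constraint (per-instance thread counts summing to T, per-instance batch sizes summing to B). A minimal sketch of both, as an illustrative reconstruction only (the function names and structure are assumptions, not the paper's actual optimizer code):

```python
from itertools import product

def profile_grid(T, n):
    """Profiling grid in the spirit of Section 3.2: thread counts
    1..T crossed with power-of-two batch sizes 2^0..2^n."""
    return list(product(range(1, T + 1), [2 ** k for k in range(n + 1)]))

def is_valid_partition(instances, T, B):
    """Check the partitioning constraint quoted from Section 5.1:
    per-instance thread counts t_j must sum to T and per-instance
    batch sizes b_j must sum to B."""
    return (sum(t for t, _ in instances) == T
            and sum(b for _, b in instances) == B)

# Example: a 16-thread, batch-64 workload split evenly across
# four thin instances of 4 threads and batch size 16 each.
config = [(4, 16)] * 4
assert is_valid_partition(config, T=16, B=64)
```

The grid is small by construction (T × (n+1) points), which is what makes exhaustive profiling of (t, b) pairs practical before the optimizer selects a partition.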