NeuroSchedule: A Novel Effective GNN-based Scheduling Method for High-level Synthesis
Authors: Jun Zeng, Mingyang Kou, Hailong Yao, Xu-Cheng Yin, Haili Wang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that NeuroSchedule obtains near-optimal solutions while achieving more than 50,000× improvement in runtime compared with the ILP-based scheduling method. At the same time, NeuroSchedule improves the scheduling results by 6.10% on average compared with the state-of-the-art entropy-directed method. |
| Researcher Affiliation | Collaboration | Jun Zeng (Tsinghua University), Mingyang Kou (Tsinghua University), Hailong Yao (Tsinghua University), Xu-Cheng Yin (University of Science and Technology Beijing), Haili Wang (Hercules Microelectronics Co., Ltd) |
| Pseudocode | Yes | Algorithm 1: NeuroSchedule algorithm. |
| Open Source Code | No | Our NeuroSchedule will be open-sourced once the paper is accepted. |
| Open Datasets | Yes | To train the GNN model, we build a dataset including 50,000 CDFGs. ... The benchmarks used in our paper are open-sourced, and we cite them in our paper. |
| Dataset Splits | No | The paper describes generating datasets and evaluating models on them, but does not provide explicit training/validation/test dataset splits (e.g., percentages or absolute counts) for reproducibility. |
| Hardware Specification | Yes | We conduct all the experiments on an Ubuntu 20.04 LTS Linux server with a CPU (Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz) and a GPU (NVIDIA Tesla V100). |
| Software Dependencies | Yes | The proposed GNN-based scheduler is implemented in Python (version 3.9.12), and the GNN model is designed and trained with PyTorch [22] (version 1.8.0) and PyG [23] (version 2.0.4). For efficiency, Gurobi [24] is adopted as the ILP solver. |
| Experiment Setup | Yes | As depicted in Figure 4, the proposed model outputs the operation priorities by regression (see the sketch after this table). To effectively train the proposed model, a dedicated training pipeline is proposed. The proposed training pipeline mainly focuses on two questions: 1. How to acquire the regression labels; 2. How to select the training objective function (a.k.a. the loss function). |
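
The Experiment Setup row describes a GNN that predicts per-operation priorities by regression over a CDFG, trained with the dependencies listed under Software Dependencies (PyTorch, PyG). The sketch below illustrates that setup only in broad strokes: the two-layer GCN architecture, feature dimensions, toy graph, and MSE loss are placeholder assumptions for illustration, not details of the authors' NeuroSchedule implementation.

```python
# Minimal sketch (not the authors' released code): a GNN that regresses a
# per-operation priority for each node of a CDFG, assuming PyTorch + PyG.
# Layer sizes, feature dimensions, and the MSE loss are illustrative choices.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv


class PriorityRegressor(torch.nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        self.head = torch.nn.Linear(hidden_dim, 1)  # one scalar priority per operation (node)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        return self.head(h).squeeze(-1)


# Toy CDFG with 4 operations and 3 dependence edges; features and labels are random
# placeholders (in the paper, labels would come from reference schedules, e.g. ILP).
graph = Data(
    x=torch.randn(4, 8),                              # 8-dim node features (placeholder)
    edge_index=torch.tensor([[0, 1, 2], [1, 2, 3]]),  # directed edges src -> dst
    y=torch.rand(4),                                  # regression labels for node priorities
)

model = PriorityRegressor(in_dim=8)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step.
model.train()
optimizer.zero_grad()
pred = model(graph.x, graph.edge_index)
loss = F.mse_loss(pred, graph.y)  # illustrative choice of training objective
loss.backward()
optimizer.step()
```

The predicted priorities would then drive a list-scheduling pass; the loss function and label source above are stand-ins for the two pipeline questions the paper raises.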