On Joint Learning for Solving Placement and Routing in Chip Design
Authors: Ruoyu Cheng, Junchi Yan
NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on public chip design benchmarks show that our method can effectively learn from experience and also provides an intermediate placement for the subsequent standard cell placement, within a few hours of training. We perform experiments based on the well-studied academic benchmark suites ISPD-2005. In Table 2 we compare the total wirelength of our end-to-end learning approach DeepPlace with a sequential learning placer that applies gradient-based optimization after separate pre-training, as well as DREAMPlace, which arranges movable macros by a heuristic at the beginning of optimization. |
| Researcher Affiliation | Academia | Ruoyu Cheng, Junchi Yan. Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China, 200240. {roy account,yanjunchi}@sjtu.edu.cn |
| Pseudocode | No | The paper describes its methods in prose and through diagrams (Figure 2, Figure 3), but does not include any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | Code is partly publicly available at: https://github.com/Thinklab-SJTU/EDA-AI. |
| Open Datasets | Yes | We perform experiments based on the well-studied academic benchmark suites ISPD-2005 [31] |
| Dataset Splits | No | The paper mentions 'pretraining' and 'finetuning' but does not specify explicit dataset split percentages or counts for training, validation, or testing sets. |
| Hardware Specification | Yes | All experiments are run on 1 NVIDIA GeForce RTX 2080Ti GPU |
| Software Dependencies | No | The paper mentions key software components like PyTorch [32], GCN [33], Adam [34], and DREAMPlace [2] but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | The Adam optimizer [34] is used with a 2.5 × 10⁻⁴ learning rate. |
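
The reported experiment setup (Adam with a 2.5 × 10⁻⁴ learning rate on a single NVIDIA GeForce RTX 2080Ti) can be mirrored in PyTorch roughly as follows. This is a minimal sketch under assumptions: `policy_net` below is a hypothetical stand-in, not the paper's actual GCN/CNN policy network, and only the optimizer configuration reflects the value quoted in the table.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model; the paper's actual policy combines a GCN
# encoder with CNN components (see its Figure 2), which is not reproduced here.
policy_net = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)

# Use the GPU if available (the paper reports a single NVIDIA GeForce RTX 2080Ti).
device = "cuda" if torch.cuda.is_available() else "cpu"
policy_net.to(device)

# Adam optimizer with the learning rate quoted in the paper (2.5e-4).
optimizer = torch.optim.Adam(policy_net.parameters(), lr=2.5e-4)
```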