HubRouter: Learning Global Routing via Hub Generation and Pin-hub Connection
Authors: Xingbo Du, Chonghua Wang, Ruizhe Zhong, Junchi Yan
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on simulated and real-world global routing benchmarks are performed to show our approach's efficiency; particularly, HubRouter outperforms the state-of-the-art generative global routing methods in wirelength, overflow, and running time. |
| Researcher Affiliation | Academia | Xingbo Du, Chonghua Wang, Ruizhe Zhong, Junchi Yan; Dept. of Computer Science and Engineering & MoE Key Lab of AI, Shanghai Jiao Tong University; {duxingbo, philipwang, zerzerzerz271828, yanjunchi}@sjtu.edu.cn |
| Pseudocode | Yes | Algorithm 1: Training in Hub-generation Phase; Algorithm 2: Training in Pin-hub-connection Phase; Algorithm 3: Sampling Routes. |
| Open Source Code | No | The paper references third-party open-source tools used in their work (e.g., 'https://github.com/haiguanl/DQN_GlobalRouting' and 'https://github.com/shininglion/rectilinear_spanning_graph'), but there is no explicit statement or link indicating that the source code for HubRouter, the main methodology of this paper, is available. |
| Open Datasets | Yes | For training, we construct global routing instances by adopting NCTU-GR [34] to route on ISPD-07 routing benchmarks [38], which is in line with [6]. |
| Dataset Splits | No | The paper states 'We search hyperparameters on the validation dataset' but does not specify the size, percentage, or methodology for creating this validation dataset split. It only details the test split. |
| Hardware Specification | Yes | Each experiment in this section is run on a machine with i9-10920X CPU, NVIDIA RTX 3090 GPU and 128 GB RAM, and is repeated 3 times under different seeds with mean and standard deviation values in line with [6]. |
| Software Dependencies | No | The paper mentions that 'Adam [25] is used to train', but it does not specify version numbers for any key software libraries (e.g., Python, PyTorch, TensorFlow) used for implementation. |
| Experiment Setup | Yes | We search hyperparameters on the validation dataset with the learning rate in [0.001, 0.0001] and reduce the learning rates by 0.96 after every 10 epochs until the validation loss no longer decreases for over 20 epochs or the number of epochs reaches a maximum of 200. ... For GAN and VAE, we use similar structures and search the number of ResNet blocks in [3, 6, 9] and the number of downsampling/upsampling layers in [2, 3, 4]. For DPM, we search the number of DDIM steps in [25, 50, 75], the guide weight w in [0.0, 1.5, 2.0], and the maximum timestep in [500, 1000]. ... Each experiment is trained with a batch size of 64 and the optimizer Adam [25] with the decay rate of first(second)-order moment estimation 0.5(0.999) and the L2 penalty coefficient 0.01. (See the sketch below this table.) |
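
The following is a minimal sketch of the training configuration quoted in the Experiment Setup row, assuming a PyTorch implementation (the paper does not name its framework or versions). The placeholder model and the commented-out training/validation steps are hypothetical; only the optimizer settings, learning-rate schedule, batch size, and early-stopping criteria come from the paper.

```python
import torch
import torch.nn as nn

# Placeholder network; the actual HubRouter generators (GAN/VAE/DPM) are not reproduced here.
model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 3, padding=1))

optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-3,              # searched over [0.001, 0.0001] on the validation set
    betas=(0.5, 0.999),   # decay rates of first/second-order moment estimation (from the paper)
    weight_decay=0.01,    # L2 penalty coefficient (from the paper)
)
# Reduce the learning rate by a factor of 0.96 after every 10 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.96)

best_val, stale = float("inf"), 0
PATIENCE, MAX_EPOCHS = 20, 200
for epoch in range(MAX_EPOCHS):
    # ... one training pass over batches of size 64 would go here ...
    scheduler.step()
    val_loss = 0.0  # ... validation loss would be computed here ...
    if val_loss < best_val:
        best_val, stale = val_loss, 0
    else:
        stale += 1
        if stale >= PATIENCE:  # stop if the validation loss has not improved for 20 epochs
            break
```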