FanoutNet: A Neuralized PCB Fanout Automation Method Using Deep Reinforcement Learning

Authors: Haiyun Li, Jixin Zhang, Ning Xu, Mingyu Liu

AAAI 2023

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Extensive experimental results on real-world industrial PCB benchmarks demonstrate that our approach achieves 100% routability in all industrial cases and improves wire length by an average of 6.8%, which makes a significant improvement compared with the state-of-the-art methods." |
| Researcher Affiliation | Collaboration | (1) School of Information Engineering, Wuhan University of Technology, Wuhan, China; (2) School of Computer Science, Hubei University of Technology, Wuhan, China; (3) Wuhan Research Institute, Huawei Device Co., Ltd., Wuhan, China |
| Pseudocode | No | The paper includes architectural diagrams (Figure 2) but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the described methodology; it only links to a dataset: "PCB dataset was acquired at https://github.com/aspdacsubmission-pcb-layout/PCBBenchmarks/". |
| Open Datasets | Yes | "We build two benchmark datasets consisting of eleven open-source PCB cases and five industrial PCB cases... PCB dataset was acquired at https://github.com/aspdacsubmission-pcb-layout/PCBBenchmarks/" |
| Dataset Splits | No | The paper mentions "Pre-Routability (Pre-RT) to compute approximate routability for RL training" but does not give specific train/validation/test split information (percentages, sample counts, or explicit methodology). |
| Hardware Specification | Yes | "The experiments are performed on a 64-bit Windows workstation with AMD Ryzen 7 5800X 8-Core Processor 3.79 GHz, and 64 GB RAM." |
| Software Dependencies | No | The paper states "Our method is implemented in Python with Pytorch for reinforcement learning, and C++ with the mingw64 compiler for PCB router," but does not give version numbers for PyTorch or the mingw64 compiler, which are needed for reproducibility. |
| Experiment Setup | Yes | "Our PPO hyperparameters, K epochs and the learning rate of policy and value model, are 20, 3e-4 and 1e-3, respectively." |
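The Experiment Setup row is the only place the paper pins down concrete training settings: K = 20 update epochs, a policy learning rate of 3e-4, and a value learning rate of 1e-3. A minimal PyTorch sketch of a PPO update using exactly those three reported values is shown below; the network sizes, clip range, and the dummy rollout batch are illustrative assumptions, not the authors' actual models or data.

```python
# Sketch of a PPO update with the paper's reported hyperparameters:
# K_EPOCHS = 20, policy lr = 3e-4, value lr = 1e-3.
# Everything else (net sizes, clip range, the fake rollout) is assumed.
import torch
import torch.nn as nn

K_EPOCHS = 20
POLICY_LR = 3e-4
VALUE_LR = 1e-3
CLIP_EPS = 0.2  # standard PPO clip range; not stated in the paper

obs_dim, n_actions, batch = 8, 4, 32
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))
value = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 1))

# Separate optimizers so the policy and value models get their own learning rates.
policy_opt = torch.optim.Adam(policy.parameters(), lr=POLICY_LR)
value_opt = torch.optim.Adam(value.parameters(), lr=VALUE_LR)

# Dummy rollout batch standing in for fanout-routing trajectories.
obs = torch.randn(batch, obs_dim)
actions = torch.randint(n_actions, (batch,))
returns = torch.randn(batch)
idx = torch.arange(batch)
with torch.no_grad():
    old_logp = torch.log_softmax(policy(obs), dim=-1)[idx, actions]
    advantages = returns - value(obs).squeeze(-1)

for _ in range(K_EPOCHS):  # K epochs of clipped-surrogate updates per batch
    logp = torch.log_softmax(policy(obs), dim=-1)[idx, actions]
    ratio = torch.exp(logp - old_logp)
    clipped = torch.clamp(ratio, 1 - CLIP_EPS, 1 + CLIP_EPS)
    policy_loss = -torch.min(ratio * advantages, clipped * advantages).mean()
    value_loss = (value(obs).squeeze(-1) - returns).pow(2).mean()

    policy_opt.zero_grad(); policy_loss.backward(); policy_opt.step()
    value_opt.zero_grad(); value_loss.backward(); value_opt.step()
```

Because the paper reports distinct learning rates for the policy and value models, the sketch uses two optimizers rather than one optimizer with parameter groups; either arrangement reproduces the stated settings.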