Circuit as Set of Points

Authors: Jialv Zou, Xinggang Wang, Jiahao Guo, Wenyu Liu, Qian Zhang, Chang Huang

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that our method achieves state-of-the-art performance in congestion prediction tasks on both the CircuitNet and ISPD2015 datasets, as well as in design rule check (DRC) violation prediction tasks on the CircuitNet dataset. Our experiments were based on the publicly available CircuitNet [5] dataset and ISPD2015 [4] dataset. We conducted a series of ablation experiments to demonstrate the roles of different components in our model.
Researcher Affiliation | Collaboration | Jialv Zou (1), Xinggang Wang (1), Jiahao Guo (1), Wenyu Liu (1), Qian Zhang (2), Chang Huang (2); (1) School of EIC, Huazhong University of Science and Technology; (2) Horizon Robotics
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | To facilitate the research of open EDA design, source codes and pre-trained models are released at https://github.com/hustvl/circuitformer.
Open Datasets | Yes | Our experiments were based on the publicly available CircuitNet [5] dataset and ISPD2015 [4] dataset.
Dataset Splits | No | The paper mentions that validation was performed ("During validation, we performed average pooling..."), but it does not specify the exact split percentages, sample counts, or the methodology used to create the validation set, which would be needed for reproducibility.
Hardware Specification | Yes | The network is trained end-to-end on a single NVIDIA RTX 3090 GPU, for 100 epochs... The time tests were conducted on AMD EPYC 7502P 32-Core 2.5GHz CPU.
Software Dependencies | No | The paper mentions an optimizer (AdamW) and learning rate schedules, but it does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, or other library versions).
Experiment Setup | Yes | The network is trained end-to-end on a single NVIDIA RTX 3090 GPU, for 100 epochs with a cosine annealing decay learning rate schedule [26] and 10-epoch warmup. We use the AdamW optimizer with learning rate γ = 0.001.
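
For reference, the reported optimizer and schedule can be approximated with the following minimal PyTorch sketch (AdamW at learning rate 0.001, 100 epochs, a 10-epoch warmup followed by cosine annealing decay). The model, batch data, loss, and warmup start factor below are placeholders and assumptions, not the authors' CircuitFormer implementation; the actual training code is in the released repository at https://github.com/hustvl/circuitformer.

```python
import torch
from torch import nn
from torch.optim import AdamW
from torch.optim.lr_scheduler import SequentialLR, LinearLR, CosineAnnealingLR

# Placeholder model and loss standing in for the CircuitFormer pipeline.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 1))
criterion = nn.MSELoss()

epochs, warmup_epochs = 100, 10
optimizer = AdamW(model.parameters(), lr=1e-3)  # learning rate gamma = 0.001 as reported

# 10-epoch warmup followed by cosine annealing decay; the linear warmup shape,
# start factor, and per-epoch stepping are assumptions not stated in the paper.
scheduler = SequentialLR(
    optimizer,
    schedulers=[
        LinearLR(optimizer, start_factor=0.01, total_iters=warmup_epochs),
        CosineAnnealingLR(optimizer, T_max=epochs - warmup_epochs),
    ],
    milestones=[warmup_epochs],
)

for epoch in range(epochs):
    # Dummy batch in place of CircuitNet / ISPD2015 feature maps.
    inputs = torch.randn(2, 3, 256, 256)
    targets = torch.randn(2, 1, 256, 256)

    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
    scheduler.step()  # stepped once per epoch in this sketch
```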