Towards Next-Generation Logic Synthesis: A Scalable Neural Circuit Generation Framework

Authors: Zhihai Wang, Jie Wang, Qingyue Yang, Yinqi Bai, Xing Li, Lei Chen, Jianye Hao, Mingxuan Yuan, Bin Li, Yongdong Zhang, Feng Wu

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on four different circuit benchmarks demonstrate that our method can precisely generate circuits with up to 1200 nodes. Moreover, our synthesized circuits significantly outperform the state-of-the-art results from several competitive winners in IWLS 2022 and 2023 competitions."
Researcher Affiliation | Collaboration | "1. MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China; 2. Noah's Ark Lab, Huawei Technologies; 3. College of Intelligence and Computing, Tianjin University"
Pseudocode | No | The paper describes various algorithms and processes (e.g., in Section 5 and Appendix D) but does not present them in structured pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper states under the NeurIPS Paper Checklist, "The code and dataset will be publicly accessible." This indicates planned future availability rather than concrete access at the time of publication. Additionally, Section H lists "Code: DNAS[6]: Licence.", which refers to a baseline method rather than the authors' own T-Net implementation.
Open Datasets | Yes | "We evaluate our approach using circuits from four benchmarks: Espresso[41], Logic Nets[42], Random, and Arithmetic. ... Benchmarks: Espresso[41]: Copyright, Logic Nets[42]: Licence, Random and Arithmetic[31]: Licence."
Dataset Splits | Yes | "Both the training and validation datasets use the complete set of input-output combinations, meaning that if the input bit size is K, there are a total of 2^K input combinations. The batch size for the network is uniformly set to 2^10. The number of training iterations is 100 thousand, and we report the optimal results evaluated during the training process." (A minimal enumeration sketch of this setup appears after this table.)
Hardware Specification | Yes | "Our experiments were conducted on a Linux-based system powered by a 3.60 GHz Intel Xeon Gold 6246R CPU and NVIDIA RTX 2080 GPU."
Software Dependencies | No | The paper states: "We train our method with ADAM [84] using the PyTorch." While PyTorch is mentioned, no specific version number is provided, and Adam is an optimizer rather than a versioned software dependency.
Experiment Setup | Yes | "The batch size is set to 1024, with the learning rate set at 0.02. The temperature coefficient τ starts at 1 and decays to 0.5 when the accuracy approaches 100%. The training process lasts for 100 thousand iterations, and the model with the highest evaluation score is selected as the final result. We use Sum Squared Errors instead of the common Mean Squared Errors (MSE) to ensure that the loss does not become too small in later stages. In Equation 7, the hyperparameter α is set to 2, and δ is set to 0.3. In wwr, β is set to 10." (A hedged training-loop sketch of this configuration follows the table.)
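
The Dataset Splits row above describes exhaustive truth-table enumeration: for a K-bit input, all 2^K input-output combinations are used for both training and validation, batched in chunks of 2^10. The following is a minimal PyTorch sketch of that enumeration, assuming a hypothetical target_fn that supplies the ground-truth output bits; it illustrates the described setup and is not the authors' released code.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def enumerate_truth_table(k: int) -> torch.Tensor:
    """Return all 2^k input combinations of a k-bit circuit as a (2^k, k) 0/1 float tensor."""
    indices = torch.arange(1 << k)
    # Bit j of each index becomes column j of the truth table.
    bits = (indices.unsqueeze(1) >> torch.arange(k)) & 1
    return bits.float()

def make_loader(k: int, target_fn, batch_size: int = 2 ** 10) -> DataLoader:
    inputs = enumerate_truth_table(k)   # the complete set of 2^k input combinations
    targets = target_fn(inputs)         # hypothetical: ground-truth output bits per combination
    # Training and validation both iterate over this same complete enumeration,
    # mirroring the dataset-splits statement quoted above.
    return DataLoader(TensorDataset(inputs, targets), batch_size=batch_size, shuffle=True)
```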
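
Likewise, the Experiment Setup row reads as a training configuration: Adam at learning rate 0.02, a batch size of 1024, a sum-of-squared-errors loss, a temperature τ that decays from 1 to 0.5 once accuracy approaches 100%, 100 thousand iterations, and selection of the best-evaluated model. The loop below is a hedged outline of that configuration, not the authors' T-Net implementation; model, loader, evaluate, the tau keyword in the forward pass, the 99% accuracy threshold, and the evaluation interval are assumptions.

```python
import torch

def train(model, loader, evaluate, iterations: int = 100_000, lr: float = 0.02):
    """Hedged sketch: Adam, SSE loss, temperature decay, best-model selection."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    tau = 1.0
    best_score, best_state = float("-inf"), None

    data_iter = iter(loader)
    for step in range(iterations):
        try:
            x, y = next(data_iter)
        except StopIteration:              # restart the loader when an epoch ends
            data_iter = iter(loader)
            x, y = next(data_iter)

        pred = model(x, tau=tau)           # assumed signature: forward pass takes the temperature
        loss = ((pred - y) ** 2).sum()     # sum of squared errors, not the mean
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        accuracy = (pred.round() == y).float().mean().item()
        if accuracy >= 0.99:               # "approaches 100%": the exact threshold is an assumption
            tau = 0.5                      # decay the temperature coefficient

        if step % 1000 == 0:               # periodic evaluation; the interval is an assumption
            score = evaluate(model)
            if score > best_score:         # keep the model with the highest evaluation score
                best_score = score
                best_state = {k: v.detach().clone() for k, v in model.state_dict().items()}

    return best_state, best_score
```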