CktGNN: Circuit Graph Neural Network for Electronic Design Automation
Authors: Zehao Dong, Weidong Cao, Muhan Zhang, Dacheng Tao, Yixin Chen, Xuan Zhang
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on OCB show the extraordinary advantages of CktGNN through representation-based optimization frameworks over other recent powerful GNN baselines and human experts' manual designs. Our work paves the way toward a learning-based open-sourced design automation for analog circuits. |
| Researcher Affiliation | Academia | 1 Department of Computer Science & Engineering, Washington University in St. Louis. 2 Department of Electrical & Systems Engineering, Washington University in St. Louis. 3 Institute for Artificial Intelligence, Peking University. 4 School of Computer Science, The University of Sydney. |
| Pseudocode | No | The paper describes the CktGNN model and framework but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our source code is available at https://github.com/zehao-dong/CktGNN. |
| Open Datasets | Yes | To tackle the challenge, we introduce Open Circuit Benchmark (OCB), an open-sourced dataset that contains 10K distinct operational amplifiers with carefully-extracted circuit specifications. [...] The OCB dataset is also going to be uploaded to OGB to augment the graph machine learning research. |
| Dataset Splits | No | The paper states the dataset contains 10,000 circuits and describes the tasks, but it does not specify explicit training, validation, and test splits (e.g., percentages or counts) within the main text. |
| Hardware Specification | Yes | We implement experiments on 12G Tesla P100 and GeForce GTX 1080 Ti |
| Software Dependencies | No | The paper mentions using Python scripts and various GNN models/frameworks (GCN, GIN, NGNN, Graphormer, D-VAE, DAGNN, PACE) but does not provide specific version numbers for any of these software dependencies. |
| Experiment Setup | Yes | Following experimental settings of D-VAE (so as DAGNN and PACE), we train sparse Gaussian Process (SGP) regression models [...] We also perform batch Bayesian Optimization with a batch size of 50 using the expected improvement heuristic [...] In the VAE architecture, we take CktGNN as the DAG encoder. [...] the VAE loss is formulated as reconstruction loss + α KL divergence, where α is set to be 0.005 [...] we perform mini-batch stochastic gradient descent with a batch size of 64. We train models for 200 epochs. The initial learning rate is set as 1E-4, and we use a schedule to modulate the learning rate over time such that it shrinks by a factor of 0.1 if the training loss has not decreased for 20 epochs. |
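
The experiment setup quoted in the last row maps onto a fairly standard VAE training configuration. The snippet below is a minimal PyTorch sketch of that configuration only, not the authors' code: the `ToyVAE` module, its layer sizes, and the MSE reconstruction term are hypothetical placeholders for the actual CktGNN DAG encoder and decoder, while the KL weight (α = 0.005), batch size (64), epoch count (200), initial learning rate (1e-4), and the plateau-based decay (factor 0.1, patience 20) follow the quoted settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hyperparameters quoted in the experiment-setup row above.
ALPHA, BATCH_SIZE, EPOCHS, INIT_LR = 0.005, 64, 200, 1e-4

class ToyVAE(nn.Module):
    """Placeholder VAE on flat feature vectors; stands in for the CktGNN DAG encoder/decoder."""
    def __init__(self, in_dim=32, latent_dim=16):
        super().__init__()
        self.enc = nn.Linear(in_dim, 2 * latent_dim)   # outputs [mu, logvar]
        self.dec = nn.Linear(latent_dim, in_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar, alpha=ALPHA):
    """Reconstruction loss + alpha * KL divergence, with alpha = 0.005 as reported."""
    rec = F.mse_loss(recon, x)                                  # placeholder reconstruction term
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + alpha * kl

model = ToyVAE()
# Plain SGD, matching the quoted "mini-batch stochastic gradient descent" phrasing.
optimizer = torch.optim.SGD(model.parameters(), lr=INIT_LR)
# Shrink the learning rate by a factor of 0.1 if the training loss stalls for 20 epochs.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min",
                                                       factor=0.1, patience=20)

for epoch in range(EPOCHS):
    x = torch.randn(BATCH_SIZE, 32)          # dummy mini-batch of size 64
    optimizer.zero_grad()
    recon, mu, logvar = model(x)
    loss = vae_loss(recon, x, mu, logvar)
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())
```

The downstream SGP regression and batch Bayesian optimization (batch size 50, expected improvement) mentioned in the same row operate on the latent vectors produced by such an encoder and are not reproduced in this sketch.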