Circuit-GNN: Graph Neural Networks for Distributed Circuit Design
Authors: Guo Zhang, Hao He, Dina Katabi
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compare our model with a commercial simulator showing that it reduces simulation time by four orders of magnitude. We also demonstrate the value of our model by using it to design a Terahertz channelizer, a difficult task that requires a specialized expert. The results show that our model produces a channelizer whose performance is as good as a manually optimized design, and can save the expert several weeks of topology and parameter optimization. |
| Researcher Affiliation | Academia | EECS, Massachusetts Institute of Technology, Cambridge, MA, USA. |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. Methods are described in prose. |
| Open Source Code | No | For more dataset details, please see our project website: https://circuit-gnn.csail.mit.edu. This URL points to a project website for dataset details, not an explicit code repository for the methodology. |
| Open Datasets | No | To train our network, we generate labeled examples using the CST STUDIO SUIT (CST official website, 2018), a commercial EM simulator. We generate about 100,000 circuit samples made of 3 to 6 resonators on a distributed computing cluster with 800 virtual CPU cores. For more dataset details, please see our project website: https://circuit-gnn.csail.mit.edu. The dataset was generated by the authors, and the project website is stated for 'dataset details' rather than providing direct public access to download the dataset itself. |
| Dataset Splits | Yes (see the split sketch after the table) | We train on 80% of the data with 4 and 5 resonators, and test on the rest, including the data with 3 and 6 resonators which are 100% reserved for testing. |
| Hardware Specification | Yes | In terms of the run-time for prediction, our model conducts one prediction in 50 milliseconds on a single NVIDIA 1080Ti GPU which is four orders of magnitude faster than running one simulation using CST on a modern desktop. |
| Software Dependencies | No | The paper mentions 'CST STUDIO SUIT' and 'Adam optimizer' but does not provide specific version numbers for these or any other software libraries or programming languages used (e.g., Python, TensorFlow, PyTorch versions). |
| Experiment Setup | Yes (see the training sketch after the table) | Training uses the Adam optimizer (Kingma & Ba, 2014) and a batch-size of 64. In total, the model is trained 500 epochs. The learning rate is initialized as 10^-4 and decayed every 200 epochs by a factor of 0.5. |
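
The split described in the Dataset Splits row can be expressed as a short procedure. Below is a minimal sketch, assuming each sample carries a resonator count; the field name `num_resonators` and the dictionary layout are hypothetical, while the 80/20 grouping follows the quoted text.

```python
import random

def split_dataset(samples, seed=0):
    """Split circuit samples following the quoted protocol:
    80% of the 4- and 5-resonator circuits go to training; the
    remaining 20%, plus all 3- and 6-resonator circuits, go to test.
    The key "num_resonators" is an assumed field name."""
    rng = random.Random(seed)
    train, test = [], []
    for sample in samples:
        if sample["num_resonators"] in (4, 5) and rng.random() < 0.8:
            train.append(sample)
        else:
            test.append(sample)
    return train, test
```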
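The Experiment Setup row maps onto a standard optimizer and step-decay schedule. Below is a minimal PyTorch sketch under that reading; the paper excerpt does not name the framework or the loss function, so PyTorch, the MSE loss, and the `train_circuit_gnn` helper are assumptions for illustration only.

```python
import torch

def train_circuit_gnn(model, train_loader, epochs=500):
    """Training loop matching the quoted hyperparameters: Adam optimizer,
    initial learning rate 1e-4, halved every 200 epochs, 500 epochs total.
    The batch size of 64 is assumed to be configured on `train_loader`;
    the MSE loss is an assumption, not quoted from the paper."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.5)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for inputs, targets in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()
        scheduler.step()  # halves the learning rate every 200 epochs
```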