LaMAGIC: Language-Model-based Topology Generation for Analog Integrated Circuits

Authors: Chen-Chia Chang, Yikang Shen, Shaoze Fan, Jing Li, Shun Zhang, Ningyuan Cao, Yiran Chen, Xin Zhang

ICML 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results show that LaMAGIC achieves a success rate of up to 96% under a strict tolerance of 0.01. We also examine the scalability and adaptability of LaMAGIC, specifically testing its performance on more complex circuits. (See the success-rate sketch after this table.) |
| Researcher Affiliation | Collaboration | IBM T. J. Watson Research Center, Duke University, MIT-IBM Watson AI Lab, New Jersey Institute of Technology, University of Notre Dame |
| Pseudocode | No | The paper describes methods and formulations but does not present any structured pseudocode or algorithm blocks. |
| Open Source Code | No | No explicit statement or link is provided for open-sourcing the code related to the described methodology. |
| Open Datasets | No | In our main experiment (Section 5.2), we construct a dataset by randomly sampling topologies of 3, 4, and 5-component circuits. This range was chosen to encapsulate the varying degrees of complexity typical in power converter circuits, thereby ensuring that our model learns to handle a variety of design scenarios. |
| Dataset Splits | No | In total, we randomly split around 120k data points for training and 12k for evaluation. (See the split sketch after this table.) |
| Hardware Specification | Yes | Our experiment runs on a machine with one NVIDIA V100 GPU. |
| Software Dependencies | No | The paper mentions NGSPICE and Flan-T5 but does not provide specific version numbers for these or other software dependencies like Python, PyTorch, or specific libraries. |
| Experiment Setup | Yes | The hyperparameters of the LM training are detailed as follows: We perform training for 120 epochs using the AdamW optimizer with a learning rate of 3×10⁻⁴ with a cosine scheduler using 300 warmup steps, a batch size of 128, and an L2 regularization strength of 10⁻⁵. (See the training-configuration sketch after this table.) |
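
The Research Type row quotes a success rate of up to 96% under a strict tolerance of 0.01. The exact metric definition is not reproduced here; the following is a minimal sketch of how such a tolerance-based success rate is commonly computed, assuming a relative-error criterion (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def success_rate(predicted, target, tol=0.01):
    """Fraction of generated circuits whose simulated output is within
    `tol` (here interpreted as relative error) of the target specification.

    The paper may use a different error definition; this is only a sketch.
    """
    predicted = np.asarray(predicted, dtype=float)
    target = np.asarray(target, dtype=float)
    rel_err = np.abs(predicted - target) / np.maximum(np.abs(target), 1e-12)
    return float(np.mean(rel_err <= tol))

# Example: 96 of 100 generated circuits meeting the 0.01 tolerance gives 0.96.
```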
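
The Open Datasets and Dataset Splits rows describe randomly sampled 3-, 4-, and 5-component topologies split into roughly 120k training and 12k evaluation points. A minimal sketch of such a random hold-out split, assuming the sampled topologies are already serialized as individual records (the function name, seed, and exact evaluation fraction are assumptions):

```python
import random

def split_dataset(records, eval_fraction=0.09, seed=0):
    """Shuffle serialized circuit records and hold out an evaluation set
    (roughly 120k train / 12k eval in the paper's setup)."""
    rng = random.Random(seed)
    records = list(records)
    rng.shuffle(records)
    n_eval = int(len(records) * eval_fraction)
    return records[n_eval:], records[:n_eval]
```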
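
The Experiment Setup row lists the reported hyperparameters. Below is a hedged sketch of how they could be expressed as a Hugging Face Transformers training configuration for a Flan-T5 model; the checkpoint size, output path, and use of the Transformers `TrainingArguments` API are assumptions, since the paper's training code is not available.

```python
from transformers import AutoModelForSeq2SeqLM, TrainingArguments

# The table only names "Flan-T5"; the checkpoint size is an assumption.
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

# Hyperparameters quoted in the Experiment Setup row.
args = TrainingArguments(
    output_dir="lamagic-run",          # hypothetical output path
    num_train_epochs=120,              # 120 epochs
    per_device_train_batch_size=128,   # batch size 128 (on one V100 this may
                                       # require gradient accumulation)
    learning_rate=3e-4,                # 3 × 10⁻⁴
    weight_decay=1e-5,                 # L2 regularization strength 10⁻⁵
    lr_scheduler_type="cosine",        # cosine learning-rate schedule
    warmup_steps=300,                  # 300 warmup steps
    optim="adamw_torch",               # AdamW optimizer
)
```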