Learning to Design Analog Circuits to Meet Threshold Specifications

Authors: Dmitrii Krylov, Pooya Khajeh, Junhan Ouyang, Thomas Reeves, Tongkai Liu, Hiba Ajmal, Hamidreza Aghasi, Roy Fox

ICML 2023

Reproducibility assessment (variable, result, and supporting LLM response for each item):
Research Type: Experimental
LLM Response: In this work, we propose a method for generating from simulation data a dataset on which a system can be trained via supervised learning to design circuits to meet threshold specifications. We moreover perform the to-date most extensive evaluation of automated analog circuit design, including experimenting in a significantly more diverse set of circuits than in prior work, covering linear, nonlinear, and autonomous circuit configurations, and show that our method consistently reaches a success rate better than 90% at a 5% error margin, while also improving data efficiency by upward of an order of magnitude.
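As an illustration of the headline metric, the sketch below shows one way a "success rate at a 5% error margin" could be computed. This is a minimal assumed formulation: the function name and the exact success criterion (e.g., whether thresholds are treated as one-sided bounds) are hypothetical and may differ from the paper's definition.

```python
import numpy as np

def success_rate(requested, achieved, eps=0.05):
    """Fraction of designs whose simulated metrics match the requested
    threshold specifications within a relative error margin eps.

    requested, achieved: arrays of shape (n_designs, n_metrics).
    A design counts as a success only if every metric is within eps
    (e.g. 5%) relative error of its requested value.
    """
    rel_err = np.abs(achieved - requested) / np.abs(requested)
    return float(np.mean(np.all(rel_err <= eps, axis=1)))
```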
Researcher Affiliation: Academia
LLM Response: Dmitrii Krylov¹, Pooya Khajeh¹, Junhan Ouyang¹, Thomas Reeves¹, Tongkai Liu¹, Hiba Ajmal¹, Hamidreza Aghasi¹, Roy Fox¹. ¹University of California, Irvine. Correspondence to: Dmitrii Krylov <dkrylov@uci.edu>, Hamidreza Aghasi <haghasi@uci.edu>, Roy Fox <royf@uci.edu>.
Pseudocode: No
LLM Response: The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code: Yes
LLM Response: To facilitate result reproduction, the code and data used in our experiments are available at GitHub (https://github.com/indylab/Circuit-Synthesis). The supplementary details of the circuits employed in our experiments can be found in Tables 8, 9, and 10 in Appendix A.5.
Open Datasets: Yes
LLM Response: To facilitate result reproduction, the code and data used in our experiments are available at GitHub (https://github.com/indylab/Circuit-Synthesis). The supplementary details of the circuits employed in our experiments can be found in Tables 8, 9, and 10 in Appendix A.5. In this work, we use the Ngspice simulator (Nenzi & Vogt, 2011). The circuit topology and its fixed parameters, as well as the simulation parameters, are provided to the simulator via a format called netlist (Lannutti et al., 2012). In addition to the netlist, the simulator loads analysis commands that determine how it measures the performance metrics of interest. For some circuits, multiple analysis commands are given to measure the circuit under distinct conditions. First, to generate simulation data, a user inputs the range and step size of each circuit parameter, and the simulator loops through this grid to output a dataset D0 of parameter-metrics pairs.
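The data-generation loop described above (a grid sweep over user-specified parameter ranges, with one simulation per grid point) can be sketched as follows. The names generate_dataset, simulate, and param_ranges are illustrative stand-ins, not the repository's actual API, and simulate abstracts away the Ngspice netlist and analysis commands.

```python
import itertools
import numpy as np

def generate_dataset(param_ranges, simulate):
    """Sweep a grid of circuit parameters and record performance metrics.

    param_ranges: dict mapping parameter name -> (start, stop, step),
        as entered by the user for each circuit parameter.
    simulate: callable taking a dict of parameter values and returning a
        dict of measured performance metrics (a stand-in for running the
        simulator on the circuit netlist with its analysis commands).
    Returns the dataset D0 as a list of (parameters, metrics) pairs.
    """
    grids = {name: np.arange(start, stop + step, step)
             for name, (start, stop, step) in param_ranges.items()}
    names = list(grids)
    dataset = []
    for values in itertools.product(*(grids[n] for n in names)):
        params = dict(zip(names, values))
        metrics = simulate(params)  # one simulator run per grid point
        dataset.append((params, metrics))
    return dataset
```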
Dataset Splits: Yes
LLM Response: Our main method uses the Dϵ dataset to train a neural network and evaluate its success rate in 10-fold cross-validation. For each circuit topology, we perform three comparisons of this method. First, we compare the main method with the five other data construction methods described in the previous section. Second, we compare the gradient-based learning algorithm with Random Forests and a simple lookup method (Section 4.2). Third, we study the sensitivity to the amount of training data by varying it. We compare the success rate of 10-fold cross-validation, which uses 90% of the data for training each fold, with using 5%, 10%, 20%, and 50% of the data for training.
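A minimal sketch of this evaluation protocol, assuming scikit-learn utilities and hypothetical make_model/evaluate callables: 10-fold cross-validation (90% of the data for training per fold) and reduced training fractions of 5-50%.

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split

def cv_success_rate(X, y, make_model, evaluate, n_splits=10, seed=0):
    """Average success rate over k-fold cross-validation (90% train per fold for k=10)."""
    scores = []
    for train_idx, test_idx in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        model = make_model()
        model.fit(X[train_idx], y[train_idx])
        scores.append(evaluate(model, X[test_idx], y[test_idx]))
    return float(np.mean(scores))

def subsample_success_rate(X, y, make_model, evaluate, train_frac, seed=0):
    """Success rate when training on only a fraction (e.g. 0.05-0.5) of the data."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=train_frac, random_state=seed)
    model = make_model()
    model.fit(X_tr, y_tr)
    return evaluate(model, X_te, y_te)
```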
Hardware Specification: No
LLM Response: The paper does not explicitly describe the hardware used for running its experiments.
Software Dependencies: No
LLM Response: The paper mentions using the Ngspice simulator, the Adam optimizer, and the Random Forests algorithm, but does not specify version numbers for these software components or for any other libraries used.
Experiment Setup: Yes
LLM Response: In this work, we propose a Multi-layer perceptron (MLP) architecture with seven layers for the task at hand. The first layer takes in a vector of size equal to the number of performance metrics for the circuit and outputs a vector of length 200. The last layer takes in a vector of length 200 and outputs the same number of parameters as in the circuit. The middle five layers are constant across all circuits and have the following [input, output] size configurations: [200, 300], [300, 500], [500, 500], [500, 300], [300, 200]. Each layer is separated by the Rectified Linear Unit (ReLU) activation function. We trained each MLP model for 100 epochs using the Adam optimizer (Kingma & Ba, 2015) with a default learning rate of 0.001. Additionally, we also trained a Random Forest (RF) model with the default number of trees (100) and default arguments.
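A hedged reconstruction of this setup in PyTorch and scikit-learn follows. The layer sizes, ReLU activations, 100 epochs, Adam at learning rate 0.001, and the 100-tree Random Forest come from the quoted text; the MSE loss and the data-loading interface are assumptions, and the function names are illustrative rather than the repository's actual API.

```python
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestRegressor

def make_mlp(n_metrics, n_params):
    """Seven-layer MLP mapping performance metrics to circuit parameters,
    with the hidden sizes quoted above and ReLU between layers."""
    sizes = [n_metrics, 200, 300, 500, 500, 300, 200, n_params]
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:  # no activation after the output layer
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

def train_mlp(model, loader, epochs=100, lr=1e-3):
    """Train with Adam at the default learning rate of 0.001 for 100 epochs.
    The MSE regression loss is an assumption; the paper's loss may differ."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for metrics, params in loader:  # loader yields (metrics, parameters) batches
            opt.zero_grad()
            loss = loss_fn(model(metrics), params)
            loss.backward()
            opt.step()
    return model

# Random Forest baseline with the default number of trees (100) and default arguments.
rf = RandomForestRegressor(n_estimators=100)
```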