Structured Neural-PI Control with End-to-End Stability and Output Tracking Guarantees
Authors: Wenqi Cui, Yan Jiang, Baosen Zhang, Yuanyuan Shi
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on traffic and power networks demonstrate that the proposed approach improves both transient and steady-state performances, while unstructured neural networks lead to unstable behaviors. |
| Researcher Affiliation | Academia | Wenqi Cui¹, Yan Jiang¹, Baosen Zhang¹, Yuanyuan Shi² (¹University of Washington, WA 98195; ²University of California San Diego, CA 92093) |
| Pseudocode | No | The paper describes the method and training process textually and with diagrams (Figure 4 shows a computation graph), but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at this link. |
| Open Datasets | Yes | The second experiment is the power system frequency control on the IEEE 39-bus New England system [52] |
| Dataset Splits | No | The paper describes training on 300 trajectories and testing on 100 trajectories, but does not explicitly mention a separate validation set or its split. For example, 'We train for 400 epochs, where each epoch trains with the loss (9) averaged on 300 trajectories, and each trajectory evolves 6s from random initial velocities.' and 'Figure 5(a) shows the transient and steady-state costs on 100 testing trajectories starting from randomly generated initial states.' |
| Hardware Specification | Yes | All experiments are run with an NVIDIA Tesla T4 GPU with 16GB memory. |
| Software Dependencies | No | The paper mentions the use of 'Adam' for optimization but does not specify other software dependencies like programming languages, libraries, or frameworks with their version numbers needed for replication. For example, 'The neural networks are updated using Adam with learning rate initializes at 0.05 and decays every 50 steps with a base of 0.7.' |
| Experiment Setup | Yes | We train for 400 epochs, where each epoch trains with 300 trajectories... The stepsize in time is set as $\Delta t = 0.02$ s... The transient cost is set to be $J(y, u) = \sum_{k=1}^{K} \lVert y(k\Delta t) - \bar{y} \rVert_1 + \hat{c}\,\lVert u(k\Delta t) \rVert_2^2$... The neural networks are updated using Adam with a learning rate initialized at 0.05 that decays every 50 steps with a base of 0.7. |
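
The quoted experiment setup translates into a short training sketch. The snippet below is illustrative only and assumes a PyTorch implementation; `policy`, `simulate_step`, `y_ref`, and the cost weight `c_hat` are placeholder names standing in for the paper's structured neural-PI controller, system simulator, output setpoint, and transient-cost weight, not identifiers from the released code.

```python
import torch

# Hedged sketch of the reported setup: a 6 s rollout with step size dt = 0.02 s,
# the transient cost J(y, u) = sum_k ||y(k*dt) - y_ref||_1 + c_hat * ||u(k*dt)||_2^2,
# and Adam with the quoted step-decay schedule. All names below are placeholders.

dt = 0.02            # time step (s), as stated in the paper
K = int(6.0 / dt)    # steps per trajectory (each trajectory evolves 6 s)
c_hat = 1e-3         # control-effort weight (placeholder value, not from the paper)

def transient_cost(policy, y0, y_ref, simulate_step):
    """Accumulate the transient cost along one simulated trajectory."""
    y, cost = y0, torch.tensor(0.0)
    for _ in range(K):
        u = policy(y)                        # controller output at the current state
        y = simulate_step(y, u, dt)          # one step of the (placeholder) dynamics
        cost = cost + torch.norm(y - y_ref, p=1) + c_hat * torch.sum(u ** 2)
    return cost

# Optimizer and schedule as quoted: Adam, lr initialized at 0.05,
# decayed every 50 steps with a base (multiplicative factor) of 0.7.
# optimizer = torch.optim.Adam(policy.parameters(), lr=0.05)
# scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.7)
```

In the reported training loop, a loss of this form would be averaged over the 300 training trajectories in each of the 400 epochs, with the 100 held-out trajectories used only for evaluation.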