Adversarially robust learning for security-constrained optimal power flow
Authors: Priya Donti, Aayushya Agarwal, Neeraj Vijay Bedmutha, Larry Pileggi, J. Zico Kolter
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the efficacy of our method in addressing SCOPF settings that allow for one, two, or three simultaneous outages on a realistic 4622-node power system with over 38 billion potential N-3 outage scenarios. |
| Researcher Affiliation | Academia | Carnegie Mellon University, Pittsburgh, PA, USA |
| Pseudocode | Yes | Algorithm 1 CAN@Y |
| Open Source Code | No | The paper mentions implementing their approach and using external tools, but does not provide a link or explicit statement that their own implementation code for CAN@Y is publicly available. |
| Open Datasets | Yes | We use our method to attempt to solve N-3 SCOPF (i.e., set k = 3) on a 4622-node test case over all 6133 potential outages (i.e., over 38 billion N-3 contingency scenarios); the associated training curve is shown in Figure 2. In total, our approach takes only 21 minutes to converge. ... the 4622-node test case with a sub-selection of 3071 N-1 potential contingencies, provided as part of the Challenge 1 stage by constructing our relaxed contingency set Y with k = 1. ... [34] ARPA-E. Grid Optimization (GO) Competition. https://gocompetition.energy.gov/, 2019. |
| Dataset Splits | No | The paper mentions evaluating on randomly selected N-1, N-2, and N-3 contingency scenarios from a larger set, but it does not provide specific data splits (e.g., percentages or counts) for training, validation, and testing as commonly understood for machine learning datasets. |
| Hardware Specification | Yes | All experiments are run on a single core of a Macbook Pro with a 2.6 GHz Core i7 CPU. |
| Software Dependencies | No | The paper mentions software like Python, SUGAR, CVXPY, and PowerWorld, but does not provide specific version numbers for these dependencies. |
| Experiment Setup | No | The paper mentions general training steps and concepts like 'step size γ' and a 'fixed number of steps' for the inner loop, but it does not provide specific numerical values for key hyperparameters such as the step size (γ), batch size, or detailed initialization strategies for the overall optimization process. |
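As a sanity check on the scenario counts quoted above (6133 individual potential outages on the 4622-node case, yielding "over 38 billion" N-3 scenarios), the N-k scenario count is just the binomial coefficient C(6133, k). A minimal sketch:

```python
from math import comb

# 6133 individual potential outages on the 4622-node test case,
# taken k at a time, gives the number of N-k contingency scenarios.
outages = 6133

for k in (1, 2, 3):
    print(f"N-{k} scenarios: {comb(outages, k):,}")

# comb(6133, 3) = 38,428,654,306, consistent with the paper's
# claim of "over 38 billion potential N-3 outage scenarios".
```

This confirms that the "38 billion" figure follows directly from the 6133 listed outages rather than from any additional enumeration.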