Differentiable Convex Optimization Layers

Authors: Akshay Agrawal, Brandon Amos, Shane Barratt, Stephen Boyd, Steven Diamond, J. Zico Kolter

NeurIPS 2019

Reproducibility variables, with the result and the supporting LLM response for each:

Research Type: Experimental
LLM Response: "We present applications in linear machine learning models and in stochastic control, and we show that our layer is competitive (in execution time) compared to specialized differentiable solvers from past work." From Section 7 (Evaluation): "Our implementation substantially lowers the barrier to using convex optimization layers. Here, we show that our implementation substantially reduces canonicalization time. Additionally, for dense problems, our implementation is competitive (in execution time) with a specialized solver for QPs; for sparse problems, our implementation is much faster."

Researcher Affiliation: Collaboration
LLM Response: Akshay Agrawal, Stanford University (akshayka@cs.stanford.edu); Brandon Amos, Facebook AI (bda@fb.com); Shane Barratt, Stanford University (sbarratt@stanford.edu); Stephen Boyd, Stanford University (boyd@stanford.edu); Steven Diamond, Stanford University (diamond@cs.stanford.edu); J. Zico Kolter, Carnegie Mellon University and Bosch Center for AI (zkolter@cs.cmu.edu)

Pseudocode: No
LLM Response: The paper includes Python code examples, but no structured pseudocode or algorithm blocks labeled as such.

Open Source Code: Yes
LLM Response: "Our software is open-source. CVXPY and our layers are available at https://www.cvxpy.org, https://www.github.com/cvxgrp/cvxpylayers."

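For reference, here is a minimal usage sketch in the style of the cvxpylayers README: it builds a small DPP-compliant problem in CVXPY, wraps it as a PyTorch layer, and differentiates through its solution. The particular problem and the random data are illustrative, not drawn from the paper's experiments.

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

# A small least-l1-error problem with a nonnegativity constraint.
n, m = 2, 3
x = cp.Variable(n)
A = cp.Parameter((m, n))
b = cp.Parameter(m)
problem = cp.Problem(cp.Minimize(cp.pnorm(A @ x - b, p=1)), [x >= 0])
assert problem.is_dpp()  # DPP compliance is what makes the layer differentiable

layer = CvxpyLayer(problem, parameters=[A, b], variables=[x])

# Solve with concrete parameter values and backpropagate through the solution.
A_t = torch.randn(m, n, requires_grad=True)
b_t = torch.randn(m, requires_grad=True)
solution, = layer(A_t, b_t)
solution.sum().backward()  # gradients land in A_t.grad and b_t.grad
```

Any DPP-compliant CVXPY problem can be wrapped this way: the layer's forward pass solves the problem, and its backward pass differentiates through the solution map.
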
Open Datasets: No
LLM Response: "We consider 30 training points and 30 test points in R^2, and we fit a logistic model with elastic-net regularization." No concrete access information for the dataset is provided.

Dataset Splits: No
LLM Response: "We consider 30 training points and 30 test points in R^2." There is no explicit mention of a validation set or of detailed train/validation/test split percentages.

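Since the 30 points are not released, a reproduction would need to generate its own data. Below is a hedged sketch of the elastic-net logistic fit as a differentiable layer, assuming hypothetical synthetic data (X, y) and regularization weights lam1, lam2; none of these values come from the paper.

```python
import cvxpy as cp
import numpy as np
import torch
from cvxpylayers.torch import CvxpyLayer

# Hypothetical synthetic data: the paper's 30 training points in R^2
# are not released, so random points stand in for them here.
np.random.seed(0)
N, n = 30, 2
X = np.random.randn(N, n)
y = (X @ np.array([1.0, -1.0]) > 0).astype(np.float64)

beta = cp.Variable(n)
lam1 = cp.Parameter(nonneg=True)  # l1 (lasso) weight
lam2 = cp.Parameter(nonneg=True)  # squared l2 (ridge) weight

# Negative log-likelihood of the logistic model plus the elastic-net penalty.
log_likelihood = cp.sum(cp.multiply(y, X @ beta) - cp.logistic(X @ beta))
objective = -log_likelihood + lam1 * cp.norm(beta, 1) + lam2 * cp.sum_squares(beta)
problem = cp.Problem(cp.Minimize(objective))
assert problem.is_dpp()

fit = CvxpyLayer(problem, parameters=[lam1, lam2], variables=[beta])

# The fitted coefficients are differentiable in the regularization weights,
# so a held-out loss could be minimized over them by gradient descent.
lam1_t = torch.tensor(0.1, requires_grad=True)
lam2_t = torch.tensor(0.1, requires_grad=True)
beta_star, = fit(lam1_t, lam2_t)
beta_star.sum().backward()
```
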
Hardware Specification: Yes
LLM Response: "We ran this experiment on a machine with a 6-core Intel i7-8700K CPU, 32 GB of memory, and an NVIDIA GeForce 1080 Ti GPU with 11 GB of memory."

Software Dependencies: Yes
LLM Response: "We have implemented DPP and the reduction to ASA form in version 1.1 of CVXPY... We have also implemented differentiable convex optimization layers in PyTorch and TensorFlow 2.0."

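The TensorFlow 2.0 interface mirrors the PyTorch one; a sketch along the lines of the cvxpylayers README, reusing the small illustrative problem from above:

```python
import cvxpy as cp
import tensorflow as tf
from cvxpylayers.tensorflow import CvxpyLayer

# Same nonnegative least-l1-error problem as above, wrapped for TensorFlow 2.0.
n, m = 2, 3
x = cp.Variable(n)
A = cp.Parameter((m, n))
b = cp.Parameter(m)
problem = cp.Problem(cp.Minimize(cp.pnorm(A @ x - b, p=1)), [x >= 0])

layer = CvxpyLayer(problem, parameters=[A, b], variables=[x])
A_tf = tf.Variable(tf.random.normal((m, n)))
b_tf = tf.Variable(tf.random.normal((m,)))

# Gradients are obtained with GradientTape, as usual in TensorFlow 2.0.
with tf.GradientTape() as tape:
    solution, = layer(A_tf, b_tf)
    loss = tf.reduce_sum(solution)
grads = tape.gradient(loss, [A_tf, b_tf])
```
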
Experiment Setup: Yes
LLM Response: "Numerical example. We consider 30 training points and 30 test points in R^2, and we fit a logistic model with elastic-net regularization." "Numerical example. Figure 2 plots the estimated average cost for each iteration of gradient descent for a numerical example, with x ∈ R^2 and u ∈ R^3, a time horizon of T = 25, and a batch size of 8. We initialize our policy's parameters with the LQR solution, ignoring the constraint on u."

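The constrained control policy in this setup can likewise be sketched as a convex optimization layer. The following is a hedged illustration using the dimensions quoted above (x ∈ R^2, u ∈ R^3, batch size 8); the quadratic cost factor P_sqrt, linear term q, and input bound u_max are hypothetical placeholders, not the paper's exact parameterization.

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

n, m, batch = 2, 3, 8  # state dim, input dim, batch size from the quote

u = cp.Variable(m)
P_sqrt = cp.Parameter((m, m))  # learnable square-root cost factor (hypothetical)
q = cp.Parameter(m)            # state-dependent linear term (hypothetical)
u_max = 1.0                    # hypothetical stand-in for the input constraint

objective = cp.Minimize(cp.sum_squares(P_sqrt @ u) + q @ u)
policy = CvxpyLayer(cp.Problem(objective, [cp.norm(u, 2) <= u_max]),
                    parameters=[P_sqrt, q], variables=[u])

# Batched evaluation: one control input per sample; gradients with respect
# to the batched parameters flow through the solver, enabling training
# of the policy by gradient descent on the estimated cost.
P_sqrt_t = torch.eye(m).repeat(batch, 1, 1).requires_grad_(True)
q_t = torch.randn(batch, m, requires_grad=True)
u_star, = policy(P_sqrt_t, q_t)  # shape (batch, m)
u_star.square().sum().backward()
```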