Safeguarded Learned Convex Optimization

Authors: Howard Heaton, Xiaohan Chen, Zhangyang Wang, Wotao Yin

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our numerical examples show convergence of Safe-L2O algorithms, even when the provided data is not from the distribution of the training data. This section presents examples using Safe-L2O. We numerically investigate i) the convergence rate of Safe-L2O relative to corresponding conventional algorithms, ii) the efficacy of safeguarding procedures when inferences are performed on data for which L2O fails intermittently, and iii) the convergence of Safe-L2O schemes even when the application of L2O operators is not theoretically justified.
Researcher Affiliation | Collaboration | Howard Heaton* (1), Xiaohan Chen* (2), Zhangyang Wang (2), Wotao Yin (3); (1) Typal Research, Typal LLC; (2) Department of Electrical and Computer Engineering, The University of Texas at Austin; (3) Alibaba US, DAMO Academy, Decision Intelligence Lab
Pseudocode | Yes | Algorithm 1: L2O Network (No Safeguard); Algorithm 2: Safeguarded L2O (Safe-L2O). A sketch of the safeguarded loop is given after the table.
Open Source Code | Yes | Code is on GitHub: github.com/VITA-Group/Safe L2O
Open Datasets | Yes | The dictionary A ∈ R^{256×512} is learned on the BSD500 dataset (Martin et al. 2001) by solving a dictionary learning problem (Xu and Yin 2014).
Dataset Splits | No | The appropriate frequency for the safeguard to trigger can be estimated by tuning L2O parameters for optimal performance on a training set without safeguarding, and then using a validation set to test various safeguards with the L2O scheme.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU/CPU models or memory specifications.
Software Dependencies | No | The paper discusses various algorithms and frameworks but does not specify software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | Inferences used α = 0.99 and EMA(0.25). As Safe-L2O convergence holds whenever β > 0, we can set β to be arbitrarily small (e.g., below machine precision); for simplicity, we use β = 0 in the experiments. Specifically, we let x ∈ R^{70} be sparse vectors with random supports of cardinality s = 6 and a single fixed dictionary A ∈ R^{50×70}. A sketch of this synthetic setup is given after the table.
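
For illustration, the safeguarded iteration summarized in Algorithm 2 can be sketched with the reported settings α = 0.99, EMA(0.25), and β = 0. This is a minimal sketch, not the authors' released code: it assumes EMA(0.25) denotes an exponential moving average that weights the newest residual by 0.25, and the callables `l2o_update`, `fallback_update`, and `residual` are hypothetical placeholders (e.g., a trained L2O network, a classical ISTA step, and a fixed-point residual, respectively).

```python
import numpy as np

def safe_l2o(x0, l2o_update, fallback_update, residual,
             alpha=0.99, beta=0.0, ema_weight=0.25, n_iters=20):
    """Minimal sketch of a safeguarded L2O loop (cf. Algorithm 2).

    l2o_update / fallback_update: callables mapping the current iterate to a new one.
    residual: callable returning a scalar progress measure (e.g. a fixed-point residual).
    The learned step is kept only if its residual is at most alpha times an
    exponential moving average of past residuals, plus a slack beta.
    """
    x = np.asarray(x0, dtype=float)
    ema = residual(x)                        # initialize the average with the starting residual
    for _ in range(n_iters):
        y = l2o_update(x)                    # candidate step from the learned operator
        if residual(y) <= alpha * ema + beta:
            x = y                            # learned step passes the safeguard
        else:
            x = fallback_update(x)           # otherwise take a convergent classical step
        ema = (1.0 - ema_weight) * ema + ema_weight * residual(x)   # EMA(0.25) update
    return x
```

With a provably convergent fallback step (e.g., an ISTA update for the sparse-coding problems below), the loop falls back to classical updates whenever the learned step fails the safeguard test, which is the behavior the paper's experiments examine on data where L2O fails intermittently or is not theoretically justified.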
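
The synthetic problem in the experiment-setup row (sparse x ∈ R^{70} with support of cardinality s = 6 and a single fixed dictionary A ∈ R^{50×70}) can be sketched as follows. The Gaussian dictionary, its scaling, and the noiseless measurements b = Ax are assumptions made for illustration; the paper's exact generation procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

m, n, s = 50, 70, 6                              # measurements, signal dimension, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)     # one fixed dictionary A in R^{50x70} (assumed Gaussian)

def sample_problem():
    """Draw a sparse x in R^70 with a random support of cardinality s = 6."""
    x = np.zeros(n)
    support = rng.choice(n, size=s, replace=False)
    x[support] = rng.standard_normal(s)
    b = A @ x                                    # measurement vector (noise omitted for simplicity)
    return x, b

x_true, b = sample_problem()
```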