From Inverse Optimization to Feasibility to ERM

Authors: Saurabh Kumar Mishra, Anant Raj, Sharan Vaswani

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we experimentally validate our approach on synthetic and real-world problems, and demonstrate improved performance compared to existing methods."
Researcher Affiliation | Academia | "Saurabh Mishra (1), Anant Raj (2), Sharan Vaswani (1). (1) Simon Fraser University; (2) SIERRA Project Team (Inria), Coordinated Science Laboratory (CSL), UIUC."
Pseudocode | Yes | Algorithm 1 for CILP. Input: A, b, training dataset D = {(z_i, x*_i)}_{i=1}^N, model f_θ. Initialize θ_1. For t = 1, 2, ..., T: compute ĉ_i = f_{θ_t}(z_i) for all i ∈ [N]; for i = 1, 2, ..., N, compute q_i = P_{C_i}(ĉ_i) by solving the optimization problem in Eq. (3); set θ_{t+1} = argmin_θ (1/(2N)) Σ_{i=1}^N ||q_i − f_θ(z_i)||². Output: θ_{T+1}.
Open Source Code | Yes | "The code is available here."
Open Datasets | Yes | "We consider two real-world tasks (Vlastelica et al., 2019), Warcraft Shortest Path and Perfect Matching, below and defer the synthetic experiments to Appendix C."
Dataset Splits | Yes | "Both datasets consist of 10000 training samples, 1000 validation samples and 1000 test samples each."
Hardware Specification | No | No specific hardware details such as GPU/CPU models, processors, or memory are mentioned for the experimental setup.
Software Dependencies | No | The paper mentions using the CVXPY library (Diamond & Boyd, 2016), the ECOS solver (Domahidi et al., 2013), and the OSQP solver (Stellato et al., 2020), but does not specify their version numbers.
Experiment Setup | Yes | "We train all the methods for 50 epochs with a batch size of 100. We employ a grid search to find the best constant step size in {0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001, 0.00005}, across both the Adam and Adagrad optimizers."
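The alternating projection-and-regression loop of Algorithm 1 can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a linear model f_θ(z) = Wz fit in closed form by least squares, a small explicit set of candidate decisions, and replaces the exact projection of Eq. (3) (solved with CVXPY in the paper) with approximate cyclic projections onto the halfspaces under which the observed decision is optimal.

```python
import numpy as np

def project_onto_C(c_hat, x_star, X_candidates, n_sweeps=50):
    # C = {c : c @ x_star <= c @ x for every candidate x}, i.e. the costs
    # under which the observed decision x_star is optimal. The paper solves
    # this projection exactly (Eq. (3)); here we approximate it by cycling
    # projections onto the halfspaces {c : c @ (x_star - x) <= 0}.
    q = c_hat.copy()
    for _ in range(n_sweeps):
        for x in X_candidates:
            d = x_star - x
            viol = q @ d
            if viol > 0:                       # halfspace constraint violated
                q = q - viol * d / (d @ d)     # project onto its boundary
    return q

def cilp(Z, X_star, X_candidates, T=20):
    # Z: (N, p) features; X_star: (N, n) observed optimal decisions.
    N, p = Z.shape
    n = X_star.shape[1]
    rng = np.random.default_rng(1)
    W = 0.1 * rng.normal(size=(n, p))  # nonzero init avoids the trivial c = 0 fixed point
    for _ in range(T):
        C_hat = Z @ W.T                # predicted costs, shape (N, n)
        Q = np.stack([project_onto_C(C_hat[i], X_star[i], X_candidates)
                      for i in range(N)])
        W = np.linalg.lstsq(Z, Q, rcond=None)[0].T   # regression step on targets q_i
    return W
```

The two steps mirror the pseudocode: each round first projects every predicted cost ĉ_i onto its feasible set C_i, then refits θ (here, W) by least squares on the projected targets q_i.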
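The hyperparameter sweep quoted above is a plain grid search over the eight step sizes crossed with the two optimizers. A sketch of that loop, where `fake_run` is a hypothetical stand-in for actually training f_θ for 50 epochs and reporting validation loss:

```python
from itertools import product

STEP_SIZES = [0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001, 0.00005]
OPTIMIZERS = ["adam", "adagrad"]

def grid_search(train_and_validate):
    # Try every (optimizer, step size) pair under the fixed budget from the
    # paper (50 epochs, batch size 100) and keep the lowest validation loss.
    best = None
    for opt, lr in product(OPTIMIZERS, STEP_SIZES):
        val_loss = train_and_validate(optimizer=opt, step_size=lr,
                                      epochs=50, batch_size=100)
        if best is None or val_loss < best[0]:
            best = (val_loss, opt, lr)
    return best

def fake_run(optimizer, step_size, epochs, batch_size):
    # Placeholder objective purely for illustration; a real run would train
    # the model and return its validation loss instead.
    target = 0.005 if optimizer == "adam" else 0.02
    return abs(step_size - target)

# grid_search(fake_run) -> (0.0, 'adam', 0.005)
```

Sixteen configurations in total, so the sweep is cheap relative to training itself; the selected pair is then used for the reported runs.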