The Perils of Learning Before Optimizing

Authors: Chris Cameron, Jason Hartford, Taylor Lundy, Kevin Leyton-Brown

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We use simulations to experimentally quantify performance gaps and identify a wide range of real-world applications from the literature whose objective functions rely on multiple prediction targets, suggesting that end-to-end learning could yield significant improvements. In Section 4, we perform a simulation study analyzing how correlation impacts the performance gap. Our results are shown in Figure 1.
Researcher Affiliation | Academia | Chris Cameron¹, Jason Hartford², Taylor Lundy¹, Kevin Leyton-Brown¹. ¹Department of Computer Science, University of British Columbia; ²Mila, Université de Montréal
Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper.
Open Source Code | Yes | Please see https://www.cs.ubc.ca/labs/beta/Projects/2Stage-E2E-Gap/ for our code and data generation process.
Open Datasets | No | The paper describes a 'synthetic benchmark' and refers to 'our distribution D' without providing concrete access information (a link, DOI, or formal citation) for a publicly available dataset.
Dataset Splits | No | The paper states, 'We generated 1000 samples from our distribution and left out 200 as test data.', but does not describe a validation split. (A minimal sketch of this protocol appears below the table.)
Hardware Specification | Yes | We trained on an 8-core machine with Intel i7 3.60GHz processors and 32 GB of memory and an Nvidia Titan Xp GPU.
Software Dependencies | No | The paper mentions the 'cvxpylayers' package and the mixed-integer solver GLPK accessed through the 'cvxpy' python package, but does not provide version numbers for either. (A sketch of how these pieces typically fit together appears below the table.)
Experiment Setup | Yes | We used the ADAM optimizer with a learning rate of 0.01 and performed 500 training iterations for each experiment. We set our quadratic penalty term ζ to be 10. (A training-loop sketch with these settings appears below the table.)
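
The Dataset Splits row reports a 1000-sample draw with 200 held out for test and no documented validation split. A minimal sketch of that protocol, assuming a generic data-generating function sample_distribution as a hypothetical stand-in (the paper's distribution D is not released):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_distribution(n, rng):
    """Hypothetical stand-in for the paper's (unreleased) distribution D."""
    features = rng.normal(size=(n, 5))
    targets = rng.normal(size=(n, 2))   # multiple prediction targets
    return features, targets

# 1000 samples total, 200 held out as test; no validation split is described.
X, Y = sample_distribution(1000, rng)
X_train, Y_train = X[:800], Y[:800]
X_test, Y_test = X[800:], Y[800:]
```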
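The Software Dependencies row names two unversioned packages: cvxpylayers for a differentiable optimization layer, and GLPK as a mixed-integer solver reached through cvxpy (the GLPK_MI backend additionally requires cvxopt). The sketch below shows how these pieces are typically wired together; the toy feasible set, the parameter names, and the downstream loss are illustrative assumptions, not the paper's actual formulation:

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

n = 4
c = cp.Parameter(n)                  # predicted objective coefficients
x = cp.Variable(n)

# Quadratic penalty (the paper reports zeta = 10) smooths the linear
# objective so the solution map is differentiable in c.
zeta = 10.0
relaxed = cp.Problem(
    cp.Minimize(c @ x + zeta * cp.sum_squares(x)),
    [x >= 0, cp.sum(x) == 1],
)
layer = CvxpyLayer(relaxed, parameters=[c], variables=[x])

c_hat = torch.randn(n, requires_grad=True)
x_star, = layer(c_hat)               # forward pass: solve the relaxed problem
loss = (x_star - torch.full((n,), 1.0 / n)).square().sum()  # toy downstream loss
loss.backward()                      # gradients flow back into c_hat

# Discrete version at evaluation time, solved with GLPK through cvxpy.
xb = cp.Variable(n, boolean=True)
mip = cp.Problem(cp.Minimize(c_hat.detach().numpy() @ xb), [cp.sum(xb) == 1])
mip.solve(solver=cp.GLPK_MI)
```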
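The Experiment Setup row pins down the optimizer (Adam with learning rate 0.01), 500 training iterations, and the quadratic penalty weight ζ = 10 used above. A sketch of a training loop with exactly those settings; the linear model, the stand-in data, and the squared-error loss are placeholders, since the paper's end-to-end architecture is not reproduced here:

```python
import torch

torch.manual_seed(0)

# Placeholder predictor mapping features to objective coefficients.
model = torch.nn.Linear(5, 4)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # reported settings

X_train = torch.randn(800, 5)   # stand-in data (see the split sketch above)
targets = torch.randn(800, 4)

for step in range(500):         # 500 training iterations, as reported
    optimizer.zero_grad()
    c_hat = model(X_train)
    # Placeholder two-stage loss; the end-to-end variant would instead pass
    # c_hat through the zeta-regularized optimization layer sketched above.
    loss = ((c_hat - targets) ** 2).mean()
    loss.backward()
    optimizer.step()
```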