Contrastive Losses and Solution Caching for Predict-and-Optimize

Authors: Maxime Mulamba, Jayanta Mandi, Michelangelo Diligenti, Michele Lombardi, Victor Bucarey, Tias Guns

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section we answer the following research questions: Q1: What is the performance of each task loss function in terms of expected regret? Q2: How does the growth of the solution cache impact the solution quality and efficiency of the learning task? Q3: How do other solver-agnostic methods benefit from the solution caching scheme? Q4: How does the methodology outlined above perform in comparison with the state-of-the-art algorithms for decision-focused learning? To do so, we evaluate our methodology on three NP-hard problems: the knapsack problem, a job scheduling problem and a maximum diverse bipartite matching problem. (A hedged sketch of the regret metric used in Q1 is given after this table.)
Researcher Affiliation | Academia | Maxime Mulamba (1), Jayanta Mandi (1), Michelangelo Diligenti (2), Michele Lombardi (3), Victor Bucarey (1), Tias Guns (1,4); (1) Data Analytics Laboratory, Vrije Universiteit Brussel, Belgium; (2) Department of Information Engineering and Mathematical Sciences, University of Siena, Italy; (3) Dipartimento di Informatica Scienza e Ingegneria, University of Bologna, Italy; (4) Department of Computer Science, KU Leuven, Belgium
Pseudocode | Yes | Algorithm 1 (Gradient-descent over combinatorial problem) and Algorithm 2 (Gradient-descent with inner approximation). (An illustrative training loop with solution caching is sketched after this table.)
Open Source Code | Yes | Code and data are publicly available at https://github.com/CryoCardiogram/ijcai-cache-loss-pno.
Open Datasets | Yes | "We generate our dataset from [Ifrim et al., 2012], which contains historical energy price data at 30-minute intervals from 2011-2013."; "This combinatorial problem is taken from CSPLib [Gent and Walsh, 1999], a library of constraint optimization problems."; "The matching instances are constructed from the CORA citation network [Sen et al., 2008]."
Dataset Splits | Yes | For all the experiments, the dataset is split into training (70%), validation (10%) and test (20%) data. (A minimal split helper is sketched after this table.)
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU/GPU models, memory) used for running the experiments.
Software Dependencies | Yes | All methods are implemented with Pytorch 1.3.1 [Paszke et al., 2019] and Gurobi 9.0.1 [Gurobi Optimization, 2021].
Experiment Setup | No | The paper mentions psolve as a parameter and states that 'The validation sets are used for selecting the best hyperparameters', but it does not explicitly list specific hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) or a detailed experimental configuration.
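
Regret (Q1) is the standard evaluation metric in predict-and-optimize: the loss in true objective value incurred by optimizing over predicted costs instead of the true costs. The sketch below is a minimal illustration, not the authors' code; the `solve` callback, the maximization convention, and the data shapes are all assumptions.

```python
import numpy as np

def regret(solve, c_pred, c_true):
    """Regret of acting on predicted costs c_pred when true costs are c_true.

    `solve(c)` is assumed to return an optimal 0/1 decision vector maximizing
    c @ x over the feasible set (e.g., a knapsack solver); for a minimization
    problem the sign of the difference flips.
    """
    x_pred = solve(c_pred)   # decision taken based on the prediction
    x_star = solve(c_true)   # best decision in hindsight
    return c_true @ x_star - c_true @ x_pred   # >= 0 under maximization

# Illustrative use with a hypothetical knapsack solver:
# test_regret = np.mean([regret(solve_knapsack, c_hat, c) for c_hat, c in test_set])
```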
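The solution caching scheme (Algorithm 2, with the psolve parameter mentioned under Experiment Setup) can be illustrated as follows: with probability psolve the exact combinatorial solver is called on the predicted costs and its solution is added to a cache; otherwise the best cached solution is reused as an inner approximation of the feasible set. This is only a sketch under stated assumptions: `model`, `solve_exact`, the cache initialization with the training instances' true optima, and the MAP-style surrogate loss are illustrative placeholders, not necessarily the paper's exact choices.

```python
import random
import torch

def train_with_cache(model, solve_exact, train_set, p_solve=0.1, epochs=10, lr=1e-3):
    """Sketch of decision-focused training with a growing solution cache.

    solve_exact(c) is assumed to return an optimal 0/1 torch tensor for cost
    vector c (maximization convention). train_set is a list of
    (features, true_costs) pairs.
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)

    # Precompute the true optimal solutions once; they also seed the cache
    # (one natural initialization of the inner approximation).
    true_sols = [solve_exact(c_true) for _, c_true in train_set]
    cache = list(true_sols)

    for _ in range(epochs):
        for (features, _c_true), v_true in zip(train_set, true_sols):
            c_pred = model(features)

            if random.random() < p_solve:
                # Occasional exact solver call; grow the cache with its result.
                v_hat = solve_exact(c_pred.detach())
                cache.append(v_hat)
            else:
                # Inner approximation: argmax over cached solutions only.
                scores = torch.stack([(c_pred.detach() * v).sum() for v in cache])
                v_hat = cache[int(torch.argmax(scores))]

            # MAP-style contrastive surrogate (an illustrative stand-in, not
            # necessarily the paper's exact loss): the true optimum should
            # score at least as well as the cache argmax under predicted costs.
            loss = (c_pred * v_hat).sum() - (c_pred * v_true).sum()

            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

With this structure, psolve directly trades solver calls (and hence training time) against the quality of the inner approximation, which is what Q2 in the paper's research questions investigates.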
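The 70/10/20 split can be reproduced with a simple shuffled index split; since the quoted text does not specify shuffling or seeding, the helper below (names and seed are assumptions) is only one possible convention.

```python
import numpy as np

def split_indices(n, train=0.7, valid=0.1, seed=0):
    """Shuffle indices 0..n-1 and split them 70/10/20 (train/validation/test)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train, n_valid = int(train * n), int(valid * n)
    return idx[:n_train], idx[n_train:n_train + n_valid], idx[n_train + n_valid:]
```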