Decision-Focused Learning: Through the Lens of Learning to Rank
Authors: Jayanta Mandi, Víctor Bucarey, Maxime Mulamba Ke Tchomba, Tias Guns
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically investigate the quality of our generic methods compared to existing decision-focused learning approaches with competitive results. Furthermore, controlling the subset of solutions allows controlling the runtime considerably, with limited effect on regret. |
| Researcher Affiliation | Academia | 1 Data Analytics Laboratory, Vrije Universiteit Brussel, Belgium; 2 Institute of Engineering Sciences, Universidad de O'Higgins, Rancagua, Chile; 3 Dept. of Computer Science, KU Leuven, Belgium. |
| Pseudocode | Yes | Algorithm 1: Gradient-descent implementation of decision-focused learning problems with Ranking Loss |
| Open Source Code | Yes | The code is available at https://github.com/JayMan91/ltr-predopt. |
| Open Datasets | Yes | Shortest Path: the dataset is generated as described at https://github.com/paulgrigas/SmartPredictThenOptimize. Energy-cost Aware Scheduling: energy price data is taken from Ifrim et al. (2012). Diverse Bipartite Matching: topologies are taken from the CORA citation network (Sen et al., 2008). |
| Dataset Splits | Yes | For each degree, we have 1,000, 250 and 10,000 training, validation and test instances, respectively. |
| Hardware Specification | No | The paper mentions 'So we can use standard automatic differentiation libraries and it can be run on GPU' in Section 4.5, but it does not specify any particular GPU model, CPU, or detailed hardware specifications used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries (e.g., Python, PyTorch, specific optimization solvers like Gurobi or CPLEX) used in the experiments. |
| Experiment Setup | Yes | To reproduce the results reported in this paper, use the hyperparameter settings described in Table 3. |
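The Pseudocode row refers to the paper's Algorithm 1, which trains by gradient descent on a ranking loss over a pool (subset) of feasible solutions; the authors' actual implementation is in the repository linked above. As a minimal illustrative sketch only, not the paper's exact loss, here is a pairwise hinge ranking loss over a solution pool. The function name, the `margin` parameter, and the dot-product objective are assumptions made for this example:

```python
import numpy as np

def pairwise_ranking_loss(c_hat, solutions, c_true, margin=0.1):
    """Hinge-style pairwise ranking loss over a pool of feasible solutions.

    For every pair (i, j) where solution i has strictly lower TRUE cost
    than solution j, penalize the prediction c_hat if it fails to rank
    i below j by at least `margin`. Illustrative sketch only; names and
    the margin formulation are not taken from the paper.
    """
    true_obj = solutions @ c_true   # true objective value of each solution
    pred_obj = solutions @ c_hat    # objective value under predicted costs
    loss = 0.0
    n = len(solutions)
    for i in range(n):
        for j in range(n):
            if true_obj[i] < true_obj[j]:
                # Want pred_obj[i] + margin <= pred_obj[j]; hinge otherwise.
                loss += max(0.0, margin + pred_obj[i] - pred_obj[j])
    return loss

# Tiny example: two candidate solutions over a 2-dimensional cost vector.
pool = np.array([[1.0, 0.0],
                 [0.0, 1.0]])
c_true = np.array([1.0, 2.0])
good = pairwise_ranking_loss(np.array([1.0, 2.0]), pool, c_true)  # correct order
bad = pairwise_ranking_loss(np.array([2.0, 1.0]), pool, c_true)   # inverted order
```

A prediction that orders the pool consistently with the true costs (with margin slack) incurs zero loss; an inverted prediction is penalized, which is the intuition behind learning-to-rank for decision-focused learning.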