Robust Losses for Decision-Focused Learning
Authors: Noah Schutte, Krzysztof Postek, Neil Yorke-Smith
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that training two state-of-the-art decision-focused learning approaches using robust regret losses improves test sample empirical regret in general while keeping computational time equivalent relative to the number of training epochs. |
| Researcher Affiliation | Academia | ¹Delft University of Technology, ²Independent Researcher; {n.j.schutte, n.yorke-smith}@tudelft.nl, krzysztof.postek@gmail.com |
| Pseudocode | No | No section or figure explicitly labeled 'Pseudocode' or 'Algorithm' was found, nor were any structured, code-like algorithmic steps presented. |
| Open Source Code | Yes | We use Python-based open-source package PyEPO [Tang and Khalil, 2022] for the data generation of two experimental problems and the training, where the robust losses are implemented on top of the existing code. The k-NN loss is currently available in PyEPO. |
| Open Datasets | Yes | Energy-cost aware scheduling. As a third experimental problem we look at energy-cost aware scheduling [Simonis et al., 1999] following precedent in a DFL setting [Mandi et al., 2022]. The dataset consists of 789 days of historical energy price data at 30-minute intervals from 2011–2013 [Ifrim et al., 2012]. |
| Dataset Splits | Yes | In all cases a validation and test set of size 100 and 1000 are used respectively. |
| Hardware Specification | No | No specific hardware details such as GPU models, CPU types, or memory specifications are mentioned for the experimental setup. The paper only mentions 'Gurobi version 10.0.1' which is software. |
| Software Dependencies | Yes | We use Python-based open-source package PyEPO [Tang and Khalil, 2022] for the data generation of two experimental problems and the training... We use the Adam optimizer with learning rate 0.01 for the gradient descent and Gurobi version 10.0.1 [Gurobi Optimization, 2023] as the optimization problem solver. |
| Experiment Setup | Yes | We compare SPO+ and PFYL (number of samples M = 1, perturbation amplitude σ = 1)... We use the Adam optimizer with learning rate 0.01 for the gradient descent... For the top-k loss we use k = 10; the same for the k-NN loss where w = 0.5. For the RO loss we set ρ = 0.5 and Γ = n/8, where n = |c|. The batch size is 32. |
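
For orientation, below is a minimal sketch (not the authors' code) of how the quoted experiment setup could be wired together with PyEPO and PyTorch: a linear cost predictor trained with the SPO+ surrogate loss, Adam at learning rate 0.01, and batch size 32, as reported in the table. The shortest-path benchmark, grid size, feature dimension, epoch count, and data-generation parameters are illustrative assumptions, and the PyEPO call names follow the library's public quickstart as best understood; the paper's robust regret losses (top-k, k-NN, RO) are implemented on top of such a loop and are not reproduced here.

```python
# Hedged sketch of the quoted training configuration (Adam, lr = 0.01, batch size 32).
# Assumptions: a shortest-path benchmark on a 5x5 grid, 5 features, 30 epochs, and
# default-ish data-generation parameters; PyEPO call names follow its public quickstart.
import torch
from torch import nn
from torch.utils.data import DataLoader
import pyepo

grid = (5, 5)                  # assumed grid size for the shortest-path problem
num_feat, num_train = 5, 1000  # assumed feature dimension and training-set size

# Synthetic shortest-path data: features x and true cost vectors c
x, c = pyepo.data.shortestpath.genData(num_train, num_feat, grid,
                                       deg=4, noise_width=0.5)

optmodel = pyepo.model.grb.shortestPathModel(grid)         # Gurobi-backed optimization model
dataset = pyepo.data.dataset.optDataset(optmodel, x, c)    # pre-solves the true optima
loader = DataLoader(dataset, batch_size=32, shuffle=True)  # batch size 32 (from the table)

predictor = nn.Linear(num_feat, c.shape[1])                # linear cost-prediction model
spo_plus = pyepo.func.SPOPlus(optmodel, processes=1)       # SPO+ surrogate loss
# PFYL alternative with the quoted settings (M = 1 sample, sigma = 1):
# pfyl = pyepo.func.perturbedFenchelYoung(optmodel, n_samples=1, sigma=1.0, processes=1)
optimizer = torch.optim.Adam(predictor.parameters(), lr=0.01)  # lr 0.01 (from the table)

for epoch in range(30):                                    # epoch budget is an assumption
    for feats, costs, sols, objs in loader:                # optDataset yields (x, c, w*, z*)
        cost_pred = predictor(feats)
        loss = spo_plus(cost_pred, costs, sols, objs).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The validation and test sets quoted above (100 and 1000 instances) would be generated and evaluated separately from this training loop; how the robust losses reweight or perturb the per-sample regret terms is specific to the paper and is not shown here.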