Melding the Data-Decisions Pipeline: Decision-Focused Learning for Combinatorial Optimization

Authors: Bryan Wilder, Bistra Dilkina, Milind Tambe (pp. 1658-1665)

AAAI 2019

Reproducibility assessment (Variable: Result — LLM Response):
Research Type: Experimental — "Experimental results across a variety of domains show that decision-focused learning often leads to improved optimization performance compared to traditional methods. We conduct experiments across a variety of domains in order to compare our decision-focused learning approach with traditional two-stage methods."
Researcher Affiliation: Academia — "Bryan Wilder, Bistra Dilkina, Milind Tambe, Center for Artificial Intelligence in Society, University of Southern California {bwilder, dilkina, tambe}@usc.edu"
Pseudocode: No — The paper describes the proposed methods and algorithms in narrative text but does not include any clearly labeled pseudocode blocks or algorithm figures.
Open Source Code: No — The paper does not contain an explicit statement about releasing the source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets: Yes — "Our experiments use the Cora dataset (Sen et al. 2008). We consider a recommendation systems problem based on the Movielens dataset (GroupLens 2011). The ground truth matrices were generated using the Yahoo Webscope (Yahoo 2007) dataset."
Dataset Splits: Yes — "In each domain, we randomly divided the instances into 80% training and 20% test."
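The quoted 80%/20% random split can be sketched in a few lines of plain Python; the instance count and seed below are illustrative assumptions, not values from the paper:

```python
import random

def train_test_split(instances, train_frac=0.8, seed=0):
    # Shuffle a copy so the caller's list is left untouched,
    # then cut at the requested training fraction.
    rng = random.Random(seed)
    shuffled = instances[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Hypothetical: 100 problem instances, split 80/20 as in the paper.
train, test = train_test_split(list(range(100)))
```

Fixing the seed makes the split reproducible across runs, which is exactly the detail a reproducibility report looks for.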
Hardware Specification: No — The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments.
Software Dependencies: No — The paper mentions software components like "Adam" for training and "metis" for partitioning, but it does not specify any version numbers for these or other software dependencies.
Experiment Setup: Yes — "All networks used ReLU activations. All networks were trained using Adam with learning rate 10^-3. We experimented with networks with 1 layer... and 2-layer networks, where the hidden layer (of size 200) gives additional expressive power."
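As a minimal NumPy sketch of the two quoted architecture variants (input/output dimensions, batch size, and initialization are assumptions for illustration; the paper trains these networks with Adam at learning rate 1e-3, which is not shown here):

```python
import numpy as np

def relu(x):
    # ReLU activation, as stated in the experiment setup
    return np.maximum(x, 0.0)

def forward_1layer(x, W, b):
    # 1-layer variant: a single linear map from features to predictions
    return x @ W + b

def forward_2layer(x, W1, b1, W2, b2):
    # 2-layer variant: hidden layer (size 200 in the paper) with ReLU
    return relu(x @ W1 + b1) @ W2 + b2

# Hypothetical dimensions for a forward pass on a small batch.
rng = np.random.default_rng(0)
n, d_in, d_out, hidden = 4, 10, 1, 200
x = rng.normal(size=(n, d_in))
W = rng.normal(size=(d_in, d_out));      b = np.zeros(d_out)
W1 = rng.normal(size=(d_in, hidden));    b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, d_out));   b2 = np.zeros(d_out)

y1 = forward_1layer(x, W, b)
y2 = forward_2layer(x, W1, b1, W2, b2)
```

Both variants map the same inputs to the same output shape; the 2-layer network simply routes through the size-200 ReLU hidden layer for the "additional expressive power" the authors describe.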