NICE: Robust Scheduling through Reinforcement Learning-Guided Integer Programming

Authors: Luke Kenworthy, Siddharth Nayak, Christopher Chin, Hamsa Balakrishnan (pp. 9821–9829)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments show that, across a variety of scenarios, NICE produces schedules resulting in 33% to 48% fewer disruptions than the baseline formulation."
Researcher Affiliation | Collaboration | Luke Kenworthy (US Air Force-MIT AI Accelerator); Siddharth Nayak, Christopher Chin, Hamsa Balakrishnan (MIT). Emails: lkenworthy99@gmail.com, {sidnayak, chychin, hamsa}@mit.edu
Pseudocode | No | The paper describes its methods in prose but does not include a formal pseudocode or algorithm block.
Open Source Code | Yes | "Our code is available at https://github.com/nsidn98/NICE"
Open Datasets | No | The paper uses an anonymized dataset from a flying squadron to construct a random event generator: 87 pilots with 32 different qualifications and 801 flights over more than six months, each flight containing between 2 and 3 slots. Because the dataset is anonymized and internal to the squadron, the paper provides no public access information (link, DOI, or citation to a public repository).
Dataset Splits | No | The paper does not specify training/validation/test dataset splits (no percentages or counts are given).
Hardware Specification | No | The paper acknowledges the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for high-performance computing resources but provides no specific hardware details such as GPU or CPU models.
Software Dependencies | No | The paper mentions using OpenAI Gym and Proximal Policy Optimization (PPO) but does not provide version numbers for these or any other software dependencies.
Experiment Setup | Yes | "The hyperparameters for all our experiments are listed in the Appendix (Kenworthy et al. 2021)."
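As an illustration of the kind of split specification the Dataset Splits entry finds missing, a deterministic split could be reported with explicit fractions and a fixed seed. This is a hedged sketch, not the paper's method; the 80/10/10 fractions and the seed are assumptions chosen for the example.

```python
import random

def split_indices(n, fracs=(0.8, 0.1, 0.1), seed=0):
    """Deterministically partition range(n) into train/val/test index lists.

    The fractions and seed here are illustrative assumptions, not values
    taken from the paper.
    """
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # fixed seed -> reproducible split
    n_train = int(fracs[0] * n)
    n_val = int(fracs[1] * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# Example: splitting the 801 flights mentioned in the dataset description.
train, val, test = split_indices(801)
print(len(train), len(val), len(test))  # 640 80 81
```

Reporting the fractions, the seed, and the resulting counts would let a reader reconstruct the exact partition.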
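Similarly, the version numbers flagged as missing under Software Dependencies could be recorded at run time. The sketch below uses Python's standard `importlib.metadata`; the package names queried ("gym", "torch") are assumptions about what an implementation of the paper might install, not names confirmed by the paper.

```python
import importlib.metadata

def record_versions(packages):
    """Return a mapping from package name to installed version (None if absent)."""
    versions = {}
    for pkg in packages:
        try:
            versions[pkg] = importlib.metadata.version(pkg)
        except importlib.metadata.PackageNotFoundError:
            versions[pkg] = None  # package not installed in this environment
    return versions

# Example: log dependency versions alongside the experiment hyperparameters.
# The package names here are illustrative assumptions.
print(record_versions(["gym", "torch"]))
```

Emitting such a mapping into the experiment logs (or a pinned requirements file) would close the versioning gap noted above.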