Learning Sensitivity of RCPSP by Analyzing the Search Process

Authors: Marc-André Ménard, Claude-Guy Quimper, Jonathan Gaudreault

IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We experimentally validate our method with the RCPSP problem." "We use the RCPSP benchmarks PSPLib [Kolisch and Sprecher, 1997] and Pack [Carlier and Néron, 2003]. We compare the accuracy and the f1-score for the classification problem and the mean squared error for the regression problem." |
| Researcher Affiliation | Academia | "Marc-André Ménard, Claude-Guy Quimper and Jonathan Gaudreault, Université Laval, Québec, Canada" |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology described in this paper is publicly available. |
| Open Datasets | Yes | "We use the RCPSP benchmarks PSPLib [Kolisch and Sprecher, 1997] and Pack [Carlier and Néron, 2003]." |
| Dataset Splits | No | "We randomly separate the instances of a benchmark into a training and a testing set with ratio 80/20." (No explicit mention of a validation split for reproduction.) |
| Hardware Specification | No | The paper does not provide any specific hardware details used for running its experiments. |
| Software Dependencies | No | "We use the model provided by MiniZinc [Stuckey et al., 2014]. We use the random forest classifier and the random forest regressor from Scikit-Learn [Pedregosa et al., 2011]." (Specific version numbers for these software components are not provided in the text.) |
| Experiment Setup | Yes | "We use the default parameters except for the number of trees (n_estimators), for which we set the value to 100. We apply a min-max normalization on all features to scale them between 0 and 1 using the relation x_i' = (x_i - min(x)) / (max(x) - min(x)). We use a timeout of 10 minutes per instance for PSPLib and 3 hours for Pack." (A code sketch of this setup appears below the table.) |
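
The Dataset Splits and Experiment Setup rows describe an 80/20 train/test split, min-max feature normalization, and Scikit-Learn random forests with n_estimators set to 100. The sketch below illustrates that setup under stated assumptions; it is not the authors' code, and the feature matrix, classification labels, and regression targets are synthetic placeholders standing in for the search-process features and sensitivity targets extracted from the RCPSP instances.

```python
# Minimal sketch of the reported experimental setup (not the authors' code):
# min-max scaling, an 80/20 train/test split, and Scikit-Learn random forests
# with n_estimators=100, evaluated with accuracy/f1 (classification) and MSE
# (regression). All data below is a synthetic placeholder.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import accuracy_score, f1_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((200, 10))            # placeholder feature matrix
y_class = rng.integers(0, 2, 200)    # placeholder classification labels
y_reg = rng.random(200)              # placeholder regression targets

# Min-max normalization: x_i' = (x_i - min(x)) / (max(x) - min(x))
X_scaled = MinMaxScaler().fit_transform(X)

# 80/20 split into training and testing sets
X_tr, X_te, yc_tr, yc_te, yr_tr, yr_te = train_test_split(
    X_scaled, y_class, y_reg, train_size=0.8, random_state=0)

# Random forests with default parameters except n_estimators=100
clf = RandomForestClassifier(n_estimators=100).fit(X_tr, yc_tr)
reg = RandomForestRegressor(n_estimators=100).fit(X_tr, yr_tr)

print("accuracy:", accuracy_score(yc_te, clf.predict(X_te)))
print("f1-score:", f1_score(yc_te, clf.predict(X_te)))
print("MSE:", mean_squared_error(yr_te, reg.predict(X_te)))
```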