On Robustness in Qualitative Constraint Networks
Authors: Michael Sioutis, Zhiguo Long, Tomi Janhunen
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we report on a preliminary experimentation that was performed primarily to assess the differences that may or may not exist between the scenarios of a given QCN N with respect to their similarity measure and perturbation tolerance. Secondarily, results are reported on the time needed to compute a robust scenario of N, on the size of [[N]], and on the % of the time that a maximum scenario of N is satisfiable and hence also a robust scenario of N. |
| Researcher Affiliation | Academia | 1Otto-Friedrich-University Bamberg, WIAI, Bamberg, Germany 2Southwest Jiaotong University, SIST & IAI, Chengdu, China 3Tampere University, ICT, Tampere, Finland |
| Pseudocode | Yes | Algorithm 1: RobustScen(N, Oracle) |
| Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing the code for the described methodology, nor does it provide a direct link to a source-code repository. |
| Open Datasets | Yes | We considered 100 satisfiable QCNs of 50 constraints each that were created using uniformly selected interval relations appearing in job-shop scheduling problems in the SMT-LIB [Barrett et al., 2016]; |
| Dataset Splits | No | The paper describes the datasets used (QCNs from SMT-LIB and standard interval relations) but does not provide specific details on how these were split into training, validation, or test sets, or specify cross-validation settings. |
| Hardware Specification | Yes | We used a computer with an Intel Xeon CPU E3-1231 v3 processor at 3.40GHz per core, 16 GB of RAM, and the Xenial Xerus x86_64 OS (Ubuntu Linux). |
| Software Dependencies | Yes | All algorithms were coded in Python and run using PyPy 7.1.1. |
| Experiment Setup | No | The paper does not provide specific hyperparameters (e.g., learning rate, batch size, epochs), model initialization, or training schedules. It only describes the general computational environment and datasets. |
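The table above notes that the paper gives pseudocode for an oracle-driven routine, Algorithm 1: RobustScen(N, Oracle), and evaluates scenarios of a QCN by a perturbation-tolerance measure. The paper's actual algorithm is not reproduced here; the following is only a hypothetical sketch of the general pattern such a routine could follow, with toy data structures and invented names (`robust_scen`, `perturbation_tolerance`, the stub `oracle`) that are not taken from the paper.

```python
from itertools import product

# Toy model (illustrative only): a QCN maps each constraint (edge) to a
# set of allowed base relations; a scenario picks one relation per edge.

def scenarios(qcn):
    """Enumerate all scenarios: one base relation chosen per constraint."""
    edges = sorted(qcn)
    for choice in product(*(sorted(qcn[e]) for e in edges)):
        yield dict(zip(edges, choice))

def perturbation_tolerance(scenario, qcn, oracle):
    """Count single-relation substitutions that keep the scenario
    satisfiable -- a crude stand-in for the paper's tolerance measure."""
    score = 0
    for edge, rel in scenario.items():
        for alt in qcn[edge] - {rel}:
            perturbed = dict(scenario)
            perturbed[edge] = alt
            if oracle(perturbed):
                score += 1
    return score

def robust_scen(qcn, oracle):
    """Return a satisfiable scenario maximizing the tolerance score."""
    best, best_score = None, -1
    for s in scenarios(qcn):
        if not oracle(s):
            continue
        score = perturbation_tolerance(s, qcn, oracle)
        if score > best_score:
            best, best_score = s, score
    return best

# Toy example with Allen-style relation names; the "oracle" here is a
# stub that rejects one particular combination.
qcn = {("x", "y"): {"before", "meets"}, ("y", "z"): {"before", "overlaps"}}
oracle = lambda s: not (s[("x", "y")] == "meets" and s[("y", "z")] == "overlaps")
print(robust_scen(qcn, oracle))
# → {('x', 'y'): 'before', ('y', 'z'): 'before'}
```

This brute-force enumeration is only tractable for toy instances; the paper's experiments use 50-constraint QCNs, for which a real implementation would rely on the oracle (e.g., a qualitative constraint solver) rather than exhaustive search.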