Contract Scheduling with Distributional and Multiple Advice
Authors: Spyros Angelopoulos, Marcin Bienkowski, Christoph Dürr, Bertrand Simon
IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper includes an experimental evaluation: 'Last, we present an experimental evaluation that confirms the theoretical findings, and illustrates the performance improvements that can be attained in practice.' |
| Researcher Affiliation | Academia | LIP6, Sorbonne University; University of Wroclaw; CNRS; IN2P3 Computing Center |
| Pseudocode | No | The paper describes algorithms verbally and through mathematical derivations (e.g., following Theorem 9: 'The above observation leads to the following algorithm for finding an optimal schedule'), but it does not present any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statement or link indicating that its source code is publicly available. |
| Open Datasets | No | The paper evaluates its algorithms on generated distributions and random values ('We first consider, as distributional advice µ, a normal distribution...', 'advice chosen according to U[0.95t, 1.05t]', 'generate P as k values chosen independently and uniformly at random'), rather than on a specific publicly available dataset with concrete access information. |
| Dataset Splits | No | The paper does not specify traditional training, validation, or test dataset splits. It evaluates its algorithms on generated distributions and random problem instances. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory, or cloud instances) used for running its experiments. |
| Software Dependencies | No | The paper does not list any specific software dependencies along with their version numbers. |
| Experiment Setup | No | The paper describes the parameters of the generated input distributions used in its experimental evaluation (e.g., 'normal distribution... with mean m, and standard deviation σ', 'uniform distribution in [0.95t, 1.05t]', 'k values chosen independently and uniformly at random'), but these are input-generation parameters rather than hyperparameters or system-level training settings typical of machine learning experiments (a sketch of these generators appears below the table). |
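
Although no code is released with the paper, the snippets quoted above describe how the experimental inputs are generated. The following is a minimal sketch (not the authors' implementation) of those three generators in Python/NumPy; the symbols m, σ, t, and k come from the quoted text, while the random seed and the sampling interval used for the multiple-advice values are assumptions made here for illustration.

```python
# Minimal sketch of the three input-generation schemes quoted in the table above.
# This is NOT the authors' code; parameter ranges not stated in the quotes are assumptions.
import numpy as np

rng = np.random.default_rng(seed=0)  # seed is an assumption, chosen for reproducibility


def distributional_advice(m, sigma, size=1):
    """Distributional advice µ: samples from a normal distribution
    with mean m and standard deviation sigma."""
    return rng.normal(loc=m, scale=sigma, size=size)


def noisy_single_advice(t):
    """Single advice drawn uniformly from [0.95 t, 1.05 t] around the value t."""
    return rng.uniform(0.95 * t, 1.05 * t)


def multiple_advice(k, low=0.0, high=1.0):
    """Multiple advice P: k values chosen independently and uniformly at random.
    The interval [low, high] is an assumption; the quoted text does not specify it."""
    return rng.uniform(low, high, size=k)


if __name__ == "__main__":
    print(distributional_advice(m=10.0, sigma=2.0, size=5))
    print(noisy_single_advice(t=10.0))
    print(multiple_advice(k=4))
```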