Quantifying Algorithmic Improvements over Time
Authors: Lars Kotthoff, Alexandre Fréchette, Tomasz Michalak, Talal Rahwan, Holger H. Hoos, Kevin Leyton-Brown
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We use the temporal Shapley value to analyze the progress made in (i) the different versions of the Quicksort algorithm; (ii) the annual SAT competitions 2007-2014; (iii) an annual competition of Constraint Programming, namely the MiniZinc challenge 2014-2016. Our analysis reveals novel insights into the development made in these important areas of research over time. |
| Researcher Affiliation | Academia | University of Wyoming; University of British Columbia; University of Warsaw; Masdar Institute of Science and Technology; Universiteit Leiden |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code available at https://git.io/vpEgM. |
| Open Datasets | Yes | Our experiments include solvers and problem instances from the 2007, 2009, 2011, 2013, and 2014 SAT competitions, as well as the MiniZinc challenge [Stuckey et al., 2014]. We used the problem instances from the 2013, 2014, 2015, and 2016 challenges (100 instances each). |
| Dataset Splits | No | The paper describes the characteristics of the datasets (e.g., lists of different lengths and types for Quicksort, different tracks for SAT and Mini Zinc), but it does not specify explicit train/validation/test dataset splits with percentages or sample counts. |
| Hardware Specification | No | The paper states: "Computing it is efficient and takes a few seconds on a standard laptop for our case studies." This is too general and does not provide specific hardware details like CPU/GPU models or memory. |
| Software Dependencies | No | The paper mentions the use of "Java" for the Quicksort implementation and refers to the "commercial Gurobi solver" (which was excluded from their experiments), but it does not list specific version numbers for any programming languages, libraries, or other software dependencies used in their core methodology or experiments. |
| Experiment Setup | Yes | We test the algorithms on lists of length 10 000, 50 000, 100 000, and 500 000. Each sorting algorithm was run on each of these lists 100 times. As in the 2014 SAT Competition, we gave each run 14 GB of memory and 5000 CPU seconds. The cutoff time for all solvers on all instances was 1200 seconds. |
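The temporal Shapley value referenced above can be illustrated with a minimal sketch. This is not the authors' released code (that is at the https://git.io/vpEgM repository); it assumes that contributors arrive in a fixed temporal order and that a coalition's value is the best (lowest) mean runtime among its members plus a baseline for the empty coalition, under which the temporal Shapley value reduces to each contributor's marginal improvement over all earlier ones. All solver names and runtimes below are illustrative.

```python
def temporal_shapley(runtimes, baseline):
    """Marginal-improvement sketch of the temporal Shapley value.

    runtimes: list of (name, mean_runtime) pairs in order of release.
    baseline: runtime attributed to the empty coalition
              (e.g., the 1200 s cutoff used in the experiments).
    """
    contributions = {}
    best = baseline
    for name, t in runtimes:
        new_best = min(best, t)
        # v(predecessors + {i}) - v(predecessors): credit only for
        # improving on the best solver released so far.
        contributions[name] = best - new_best
        best = new_best
    return contributions

# Illustrative data: a solver that is slower than its predecessor
# (solver2011 here) receives zero credit.
solvers = [("solver2007", 900.0), ("solver2009", 400.0),
           ("solver2011", 450.0), ("solver2013", 250.0)]
print(temporal_shapley(solvers, baseline=1200.0))
```

With a strict temporal order there is only one permutation consistent with release times, which is why the usual average over permutations collapses to a single chain of marginal contributions.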