Learning-Augmented Algorithms for Online TSP on the Line
Authors: Themistoklis Gouleakis, Konstantinos Lakis, Golnoosh Shahkarami
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | All missing proofs can be found in the full version of the paper along with an assessment of our algorithms using synthetic data. |
| Researcher Affiliation | Academia | Themistoklis Gouleakis (National University of Singapore), Konstantinos Lakis (ETH Zürich), Golnoosh Shahkarami (Max Planck Institute for Informatics, Universität des Saarlandes); tgoule@nus.edu.sg, klakis@student.ethz.ch, gshahkar@mpi-inf.mpg.de |
| Pseudocode | Yes | Algorithm 1: FARFIRST update function. Algorithm 2: NEARFIRST update function. Algorithm 3: PIVOT update function. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code for the described methods or links to a code repository. |
| Open Datasets | No | The paper mentions 'assessment of our algorithms using synthetic data' but does not provide concrete access information (e.g., link, DOI, formal citation) for a publicly available or open dataset. |
| Dataset Splits | No | The paper refers to an 'assessment of our algorithms using synthetic data' but does not provide specific details on dataset splits (e.g., percentages, sample counts, or cross-validation setup) for training, validation, or testing. |
| Hardware Specification | No | The paper does not explicitly describe any specific hardware specifications (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names with versions) needed to replicate the experiment. |
| Experiment Setup | No | The paper describes algorithms and their theoretical competitive ratios, and mentions an 'assessment of our algorithms using synthetic data,' but it does not provide specific experimental setup details such as hyperparameter values, training configurations, or system-level settings. |