Learning-Augmented Priority Queues

Authors: Ziyad Benomar, Christian Coester

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we empirically evaluate the performance of our learning-augmented priority queue (LAPQ) by comparing it with Binary and Fibonacci heaps. We use two standard benchmarks for this evaluation: sorting and Dijkstra's algorithm.
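The Dijkstra benchmark above can be illustrated with a minimal sketch. This is not the paper's LAPQ; it runs plain Dijkstra on a binary heap (Python's `heapq`) and instruments it with a hypothetical comparison-counting wrapper, since the paper reports comparison counts rather than running time. The names `CountingKey` and `dijkstra` are illustrative, not from the paper.

```python
import heapq

class CountingKey:
    """Priority wrapper that counts every comparison made by the heap
    (hypothetical instrumentation; the paper measures comparisons)."""
    count = 0

    def __init__(self, v):
        self.v = v

    def __lt__(self, other):
        CountingKey.count += 1
        return self.v < other.v

def dijkstra(graph, source):
    """Standard Dijkstra over an adjacency dict {u: [(v, weight), ...]}."""
    dist = {source: 0}
    pq = [(CountingKey(0), source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d.v > dist.get(u, float("inf")):
            continue  # stale queue entry, already settled with smaller dist
        for v, w in graph.get(u, []):
            nd = d.v + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (CountingKey(nd), v))
    return dist

# Tiny example graph: a -> b -> c is shorter than a -> c directly.
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 1)], "c": []}
print(dijkstra(g, "a"))  # → {'a': 0, 'b': 1, 'c': 2}
```

Swapping the heap for a learning-augmented queue would change `CountingKey.count`, which is the quantity the paper's figures compare.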
Researcher Affiliation | Academia | Ziyad Benomar (ENSAE, École Polytechnique, Fair Play joint team; ziyad.benomar@ensae.fr); Christian Coester (Department of Computer Science, University of Oxford, UK; christian.coester@cs.ox.ac.uk)
Pseudocode | Yes | Algorithm 1: Randomized insertion in binary heap; Algorithm 2: Exp Search Insertion(Q, vj, u); Algorithm 3: Insertion with dirty and clean comparisons; Algorithm 4: Insertion with rank prediction
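To give a feel for the exponential-search insertion listed above (Algorithm 2), here is a generic sketch of the technique over a sorted list: starting from a predicted rank, grow a bracketing window by doubling steps, then binary-search inside it. The function name `exp_search_insert` and the details are assumptions about the general method, not a transcription of the paper's algorithm; its appeal is that the cost scales with the prediction error rather than the queue size.

```python
import bisect

def exp_search_insert(sorted_list, x, predicted_rank):
    """Insert x into sorted_list, starting an exponential (doubling)
    search from a predicted rank. Sketch of the general technique;
    not the paper's exact Algorithm 2."""
    n = len(sorted_list)
    p = max(0, min(predicted_rank, n))
    # Widen the lower bound leftward by doubling steps until it brackets x.
    lo, step = p, 1
    while lo > 0 and sorted_list[lo - 1] > x:
        lo = max(0, lo - step)
        step *= 2
    # Widen the upper bound rightward the same way.
    hi, step = p, 1
    while hi < n and sorted_list[hi] < x:
        hi = min(n, hi + step)
        step *= 2
    # Binary search only inside the bracketed window, then insert.
    i = bisect.bisect_left(sorted_list, x, lo, hi)
    sorted_list.insert(i, x)
    return i

a = [1, 3, 5, 7, 9]
exp_search_insert(a, 6, 4)   # good prediction: tiny search window
print(a)                     # → [1, 3, 5, 6, 7, 9]
```

With a perfect rank prediction the window never grows, so the insertion costs O(1) comparisons; with an arbitrarily bad prediction it degrades gracefully to logarithmic in the error.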
Open Source Code | Yes | The code used for conducting the experiments is available at github.com/Ziyad-Benomar/Learning-augmented-priority-queues.
Open Datasets | Yes | The city maps were obtained using the Python library Osmnx [Boeing, 2017]. We also evaluate the performance of Dijkstra's algorithm with our LAPQ on synthetic random graphs. For this, we use Poisson Voronoi Tessellations (PVT), which are a commonly used random graph model for street systems [Gloaguen et al., 2006, Gloaguen and Cali, 2018, Benomar et al., 2022a,b].
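The synthetic PVT graphs start from a homogeneous Poisson point process whose Voronoi cells model street blocks. The sketch below samples only the seed points; it is an assumption about the general PVT construction, and the paper's exact parameters may differ. The edge skeleton of the Voronoi diagram of these points (e.g. via `scipy.spatial.Voronoi`) would then serve as the street graph.

```python
import numpy as np

def poisson_points(intensity, width, height, rng):
    """Sample a homogeneous Poisson point process on a width x height
    window: the seed points of a Poisson Voronoi tessellation.
    (Sketch; the paper's exact PVT parameters are not specified here.)"""
    n = rng.poisson(intensity * width * height)  # random number of seeds
    xs = rng.uniform(0, width, n)                # uniform locations
    ys = rng.uniform(0, height, n)
    return np.column_stack([xs, ys])

rng = np.random.default_rng(0)
pts = poisson_points(intensity=50, width=1.0, height=1.0, rng=rng)
# Next step (not executed here): scipy.spatial.Voronoi(pts) yields the
# cell boundaries that play the role of streets in the random graph.
```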
Dataset Splits | No | The paper evaluates algorithmic performance on various datasets/graphs but does not specify explicit train/validation/test splits for reproduction.
Hardware Specification | No | The paper does not provide specific hardware details for running its experiments. In the NeurIPS Paper Checklist, it states: 'Our figures show the number of comparisons used in each experiment. The computer resources are not relevant.'
Software Dependencies | No | The paper mentions using the 'Python library Osmnx [Boeing, 2017]' but does not specify version numbers for Python or Osmnx, or any other significant software dependencies required for full reproducibility.
Experiment Setup | Yes | For the sorting benchmark, we also compare our results with those from Bai and Coester [2023]. For Dijkstra's algorithm, we assess performance on both real city maps and synthetic random graphs. In all the experiments, each data point represents the average result from 30 independent runs.
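Since comparison counts on random inputs are themselves random, each plotted point averages 30 independent runs. A hedged sketch of such a harness for the sorting benchmark, using a binary heap as the baseline queue (the function names and the choice of uniform random inputs are assumptions for illustration):

```python
import heapq
import random

class Key:
    """Value wrapper that counts comparisons (instrumentation only)."""
    count = 0

    def __init__(self, v):
        self.v = v

    def __lt__(self, other):
        Key.count += 1
        return self.v < other.v

def heap_sort_comparisons(values):
    """Sort via a binary heap and return the number of comparisons used."""
    Key.count = 0
    h = [Key(v) for v in values]
    heapq.heapify(h)
    out = [heapq.heappop(h).v for _ in range(len(h))]
    assert out == sorted(values)  # sanity check: the sort is correct
    return Key.count

def average_over_runs(n, runs=30, seed=0):
    """One data point: mean comparison count over `runs` independent
    random inputs of size n, mirroring the paper's 30-run averaging."""
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        total += heap_sort_comparisons([rng.random() for _ in range(n)])
    return total / runs
```

Sweeping `n` (and, for a learning-augmented queue, the prediction quality) and plotting `average_over_runs(n)` reproduces the shape of such an experiment.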