Algorithms with Prediction Portfolios

Authors: Michael Dinitz, Sungjin Im, Thomas Lavastida, Benjamin Moseley, Sergei Vassilvitskii

NeurIPS 2022

Reproducibility Variable — Result — LLM Response
Research Type — Experimental: Our results are primarily theoretical; however, we have included a preliminary empirical validation of our algorithm for min-cost perfect matching in the supplementary material.
Researcher Affiliation — Collaboration: Michael Dinitz (Johns Hopkins University, mdinitz@cs.jhu.edu); Sungjin Im (UC Merced, sim3@ucmerced.edu); Thomas Lavastida (University of Texas at Dallas, thomas.lavastida@utdallas.edu); Benjamin Moseley (Carnegie Mellon University, moseleyb@andrew.cmu.edu); Sergei Vassilvitskii (Google Research, sergeiv@google.com)
Pseudocode — Yes: Algorithm 1, "Minimum cost matching with k predicted dual solutions" ... Algorithm 2, "Algorithm for combining fractional solutions online for load balancing."
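The portfolio idea behind the paper's pseudocode (maintain k predicted dual solutions and select the one best suited to the instance at hand) can be illustrated with a minimal sketch. This is a hedged simplification, not the paper's actual Algorithm 1: the function names, the feasibility check, and the brute-force optimum below are our own illustrative constructions, assuming the standard LP dual of min-cost bipartite perfect matching.

```python
from itertools import permutations

def matching_cost(cost, perm):
    # Cost of the perfect matching that assigns left node i to right node perm[i].
    return sum(cost[i][j] for i, j in enumerate(perm))

def dual_objective(y_left, y_right):
    # Objective of a dual solution: sum of all dual variables.
    return sum(y_left) + sum(y_right)

def is_dual_feasible(cost, y_left, y_right):
    # Dual feasibility: y_left[i] + y_right[j] <= cost[i][j] on every edge.
    n = len(cost)
    return all(y_left[i] + y_right[j] <= cost[i][j]
               for i in range(n) for j in range(n))

def best_predicted_dual(cost, predictions):
    # Portfolio selection: among the feasible predicted dual solutions,
    # keep the one with the largest dual objective, i.e. the tightest
    # lower bound on the optimal matching cost.
    feasible = [(dual_objective(yl, yr), (yl, yr))
                for yl, yr in predictions
                if is_dual_feasible(cost, yl, yr)]
    return max(feasible)[1] if feasible else None

# Tiny example: a 2x2 cost matrix and a portfolio of two predicted duals.
cost = [[4, 2],
        [2, 3]]
predictions = [([0, 0], [0, 0]),   # trivially feasible, weak bound (0)
               ([2, 2], [0, 0])]   # feasible, dual objective 4
yl, yr = best_predicted_dual(cost, predictions)
opt = min(matching_cost(cost, p) for p in permutations(range(2)))
print(dual_objective(yl, yr), opt)  # prints "4 4": the chosen bound is tight here
```

In the sketch the selected dual's objective matches the brute-force optimum, so a primal solver warm-started from it would have little work left; the paper's contribution is the theory of how large such a portfolio must be and how to learn it, which this toy selection step does not capture.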
Open Source Code — Yes: "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See the supplementary material."
Open Datasets — No: The paper mentions "a set of problem instances" and discusses "sample complexity" for learning predictions, but it does not provide access information (e.g., links, DOIs, or citations with authors and year) for any publicly available dataset used in its preliminary empirical validation.
Dataset Splits — No: The paper refers to a "preliminary empirical validation" but does not specify any training, validation, or test splits.
Hardware Specification — No: The paper's checklist answers "Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes]", but the available text does not contain the specific hardware details (e.g., exact GPU/CPU models or memory amounts).
Software Dependencies — No: The paper does not name specific software with version numbers (e.g., programming languages, libraries, frameworks, or solvers) that would be needed to replicate the experiments.
Experiment Setup — No: The paper provides algorithmic details and theoretical proofs, but no specific experimental setup details such as hyperparameter values, training configurations, or system-level settings for its empirical validation.