Faster Matchings via Learned Duals

Authors: Michael Dinitz, Sungjin Im, Thomas Lavastida, Benjamin Moseley, Sergei Vassilvitskii

NeurIPS 2021

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We validate our theoretical findings through experiments on both real and synthetic data." "In this section we present experimental results on both synthetic and real data sets." |
| Researcher Affiliation | Collaboration | Michael Dinitz (Johns Hopkins University, mdinitz@cs.jhu.edu); Sungjin Im (UC Merced, sim3@ucmerced.edu); Thomas Lavastida (Carnegie Mellon University, tlavasti@andrew.cmu.edu); Benjamin Moseley (Carnegie Mellon University, moseleyb@andrew.cmu.edu); Sergei Vassilvitskii (Google, sergeiv@google.com) |
| Pseudocode | Yes | Algorithm 1 ("Fast Approx. for Distance to Feasibility") and Algorithm 2 ("Simple Primal-Dual Scheme for MWPM"); see the dual-repair sketch after this table. |
| Open Source Code | Yes | "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]" |
| Open Datasets | Yes | "We use several datasets from the UCI Machine Learning repository [16]. See Table 1 for a summary." [16] Dheeru Dua and Casey Graff. UCI Machine Learning Repository, 2017. URL: http://archive.ics.uci.edu/ml |
| Dataset Splits | No | "For these experiments, we used 20 training instances to learn the initial duals and then tested those on 10 new instances." The paper describes only how training and test instances were used; it does not mention a validation set or a standard train/validation/test split. |
| Hardware Specification | Yes | "All of our experiments were run on Google Cloud Platform [24] e2-standard-2 virtual machines with 2 virtual CPUs and 8 GB of memory." |
| Software Dependencies | No | The paper mentions implementing the Hungarian algorithm but lists no software dependencies with version numbers (programming language, libraries, or frameworks); a hypothetical baseline is sketched after this table. |
| Experiment Setup | No | The paper describes how samples are used to learn initial duals (e.g., 20 training instances, taking the median value) and the parameters of the synthetic data generator (n, l, v), but it gives no conventional machine-learning hyperparameters such as learning rates, batch sizes, or optimizer settings; see the median-aggregation sketch after this table. |
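
The paper's pseudocode is not reproduced here, but the dual-feasibility step it revolves around is easy to illustrate. The sketch below repairs possibly infeasible predicted duals for min-weight perfect bipartite matching by lowering each right-side dual until every constraint y_left[i] + y_right[j] <= C[i, j] holds. This is a minimal stand-in under that LP formulation, not the paper's Algorithm 1, whose repair rule and approximation guarantees differ; the function name `repair_duals` and the NumPy dependency are assumptions.

```python
import numpy as np

def repair_duals(C, y_left, y_right):
    """Make predicted duals LP-feasible for min-weight perfect matching.

    Keeps the left duals fixed and lowers each right-side dual just enough
    that y_left[i] + y_right[j] <= C[i, j] holds for every edge.
    (Illustrative repair rule only; not the paper's Algorithm 1.)
    """
    # Tightest value each right dual may take given the fixed left duals.
    caps = (C - y_left[:, None]).min(axis=0)
    return y_left, np.minimum(y_right, caps)

# Quick check on a random instance (illustrative only).
rng = np.random.default_rng(0)
C = rng.uniform(0.0, 10.0, size=(4, 4))
y_left, y_right = repair_duals(C, rng.uniform(0, 5, 4), rng.uniform(0, 5, 4))
assert (y_left[:, None] + y_right[None, :] <= C + 1e-9).all()
```

Once feasible, such duals can seed a primal-dual solver so that fewer augmentation rounds are needed on instances resembling the training data, which is the speedup the paper targets.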
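Because no dependencies or versions are listed, any reproduction must choose its own solver. One plausible choice, an assumption on our part rather than anything the paper names, is SciPy's `linear_sum_assignment`, which solves the same min-weight perfect matching problem the Hungarian algorithm does:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Random square cost matrix standing in for one test instance.
C = np.random.default_rng(1).uniform(0.0, 10.0, size=(5, 5))

# Min-weight perfect matching (the problem the Hungarian algorithm solves).
rows, cols = linear_sum_assignment(C)
print("matching cost:", C[rows, cols].sum())
```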
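The Experiment Setup row notes that initial duals are learned by taking a median over training instances. A minimal sketch of that aggregation, assuming a hypothetical `solve_duals` callback that returns optimal duals for one cost matrix (e.g., as a by-product of a Hungarian solve); everything beyond "20 training instances" and "median value" is an illustrative assumption:

```python
import numpy as np

def learn_duals(training_costs, solve_duals):
    """Aggregate optimal duals over training instances, coordinate-wise.

    `solve_duals` is a hypothetical callback returning (y_left, y_right)
    optimal for one square cost matrix. The paper reports learning from
    20 training instances and taking the median value.
    """
    stacked = np.stack(
        [np.concatenate(solve_duals(C)) for C in training_costs]
    )
    med = np.median(stacked, axis=0)  # coordinate-wise median
    n = training_costs[0].shape[0]    # split back into the two sides
    return med[:n], med[n:]
```

The learned pair would then be passed through a feasibility repair (as sketched above) before being used to warm-start the matching solver on test instances.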