Learning for Edge-Weighted Online Bipartite Matching with Robustness Guarantees

Authors: Pengfei Li, Jianyi Yang, Shaolei Ren

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we run empirical experiments to demonstrate the advantages of LOMAR compared to existing baselines."
Researcher Affiliation | Academia | Pengfei Li, Jianyi Yang, Shaolei Ren — University of California, Riverside, CA 92521, United States.
Pseudocode | Yes | "Algorithm 1: Inference of Robust Learning-based Online Bipartite Matching (LOMAR)"
Open Source Code | No | "Our implementation of all the considered algorithms, including LOMAR, is based on the source codes provided by (Alomrani et al., 2022)"
Open Datasets | Yes | "We choose the MovieLens dataset (Harper & Konstan, 2015), which provides a total of 3952 movies, 6040 users and 1000209 ratings."
Dataset Splits | No | "The number of graph instances in the training and testing datasets are 20000 and 1000, respectively."
Hardware Specification | Yes | "training the RL model in LOMAR usually takes less than 8 hours on a shared research cluster with one NVIDIA K80 GPU"
Software Dependencies | No | The paper mentions the Gurobi optimizer but does not give its version number, nor version numbers for any other software dependency.
Experiment Setup | Yes | "For applicable algorithms (i.e., DRL, DRL-OS, and LOMAR), we train the RL model for 300 epochs in the training dataset with a batch size of 100. In LOMAR, the parameter B = 0 is used to follow the strict definition of competitive ratio. ... Our RL architecture has 3 fully connected layers, each with 100 hidden nodes."
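The pseudocode row names Algorithm 1 but the extract does not reproduce it. As an illustration only, the robustness-switching idea behind a LOMAR-style inference step can be sketched as follows; the function name and the exact form of the guard are assumptions, not the paper's pseudocode. The agent follows the RL policy's proposed match only while the RL policy's cumulative reward stays within a factor rho of an expert algorithm's reward, minus a slack B; otherwise it falls back to the expert's action. With B = 0, as in the experiment-setup row, the guard is strict.

```python
def lomar_step(rl_action, expert_action,
               rl_reward_so_far, expert_reward_so_far,
               rl_gain, expert_gain, rho, B=0.0):
    """One decision step of a hypothetical robustness-switching rule.

    Follow the RL policy only if doing so keeps its cumulative reward
    at least rho * (expert's cumulative reward) - B; otherwise fall
    back to the expert's action. This is an illustrative sketch of a
    competitive-ratio guard, not the paper's Algorithm 1.
    """
    if rl_reward_so_far + rl_gain >= rho * (expert_reward_so_far + expert_gain) - B:
        return rl_action
    return expert_action
```

A larger slack B lets the learned policy deviate further from the expert before the fallback triggers, trading worst-case robustness for average-case performance.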
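The experiment-setup row specifies the network size only ("3 fully connected layers, each with 100 hidden nodes"). A minimal NumPy forward pass matching that size is sketched below; the input/output dimensions, ReLU activations, and initialization scheme are assumptions, since the extract does not state them.

```python
import numpy as np

def make_mlp(in_dim, hidden=100, out_dim=1, seed=0):
    """Weights for a 3-fully-connected-layer network with 100 hidden
    nodes per layer, the size quoted in the setup row. He-style
    initialization is an illustrative assumption."""
    rng = np.random.default_rng(seed)
    dims = [in_dim, hidden, hidden, out_dim]  # 3 weight matrices = 3 FC layers
    return [(rng.standard_normal((d_in, d_out)) * np.sqrt(2.0 / d_in),
             np.zeros(d_out))
            for d_in, d_out in zip(dims[:-1], dims[1:])]

def forward(layers, x):
    """Forward pass: ReLU on hidden layers, linear output layer."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)
    return x
```

Training details from the same row (300 epochs, batch size 100) would sit in the optimization loop around this network, which the extract does not describe.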