MAGNOLIA: Matching Algorithms via GNNs for Online Value-to-go Approximation
Authors: Alexandre Hayderi, Amin Saberi, Ellen Vitercik, Anders Wikum
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We train a GNN to estimate VTG and show empirically that this GNN returns high-weight matchings across a variety of tasks. Moreover, we identify a common family of graph distributions... providing theoretical justification for our approach. Empirical analysis. We present MAGNOLIA (Matching Algorithms via GNNs for Online Value-to-go Approximation), a GNN-based online matching framework... trained on graphs with 16 total nodes, MAGNOLIA beats state-of-the-art baseline algorithms... |
| Researcher Affiliation | Academia | 1Department of Computer Science, Stanford University, Stanford, CA, USA 2Department of Management Science & Engineering, Stanford University, Stanford, CA, USA. Correspondence to: Anders Wikum <wikum@stanford.edu>. |
| Pseudocode | Yes | We include pseudo-code for computing VG(S, t) in Algorithm 1. |
| Open Source Code | Yes | We present MAGNOLIA (Matching Algorithms via GNNs for Online Value-to-go Approximation), a GNN-based online matching framework... https://github.com/anders-wikum/GNN-OBM |
| Open Datasets | Yes | To simulate performance on real-world crowdsourcing tasks, we generate semi-synthetic graphs from OSMnx (Boeing, 2017), a library that encodes road networks for use in ride-sharing applications, and the gMission dataset (Chen et al., 2014), whose base graph comes from crowdsourcing data for assigning workers to tasks. |
| Dataset Splits | Yes | We perform hyperparameter tuning using a validation set of size 300. |
| Hardware Specification | Yes | All the training was done on an NVIDIA GeForce GTX Titan X. |
| Software Dependencies | No | The paper mentions software such as 'PyTorch Geometric' and the 'Optuna library' but does not specify version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | We construct a training set of 2000 instances on 16 nodes. We perform hyperparameter tuning using a validation set of size 300. We perform around 1000 trials, tuning the parameters as described in Table 1. |
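The tuning protocol quoted above (roughly 1000 trials, each scored on a 300-instance validation set) can be sketched as a simple search loop. The snippet below is an illustrative stand-in only: the search space, the `validation_score` objective, and all parameter names are hypothetical, since the paper's actual GNN objective and Table 1 search space are not reproduced here, and it uses plain random search rather than Optuna's samplers.

```python
import random

# Hypothetical search space standing in for the paper's Table 1;
# the real tuned parameters are not reproduced here.
SEARCH_SPACE = {
    "hidden_dim": [32, 64, 128],
    "num_layers": [2, 3, 4],
    "lr": [1e-4, 1e-3, 1e-2],
}

def validation_score(params):
    # Placeholder objective. In the paper, this would be the trained
    # GNN's matching quality on the 300-instance validation set.
    return (
        -(params["hidden_dim"] - 64) ** 2
        - (params["num_layers"] - 3) ** 2
        - abs(params["lr"] - 1e-3)
    )

def random_search(n_trials=1000, seed=0):
    # Mirror the reported scale: ~1000 trials, each evaluated once
    # against the validation objective; keep the best configuration.
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        score = validation_score(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params

if __name__ == "__main__":
    print(random_search())
```

In practice a framework like Optuna replaces the uniform `rng.choice` sampling with adaptive samplers and adds pruning of unpromising trials, but the trial/score/keep-best structure is the same.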