Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Bi-Objective Online Matching and Submodular Allocations
Authors: Hossein Esfandiari, Nitish Korula, Vahab Mirrokni
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we give the first results for bi-objective online submodular optimization, providing almost matching upper and lower bounds for allocating items to agents with two submodular value functions. We first present almost tight online approximation algorithms for the general online bi-objective submodular welfare maximization problem (see Theorems 2.3 and 2.5, and Fig. 1). We show that a simple random selection rule along with the greedy algorithm (when each item arrives, randomly pick one objective to greedily optimize) results in almost optimal algorithms. Our allocation rule is thus both very fast to run and trivially easy to implement. The main technical result of this part is the hardness result showing that the achieved approximation factor is almost tight unless P=NP. |
| Researcher Affiliation | Collaboration | Hossein Esfandiari, University of Maryland, College Park, MD 20740, EMAIL; Nitish Korula, Google Research, New York, NY 10011, EMAIL; Vahab Mirrokni, Google Research, New York, NY 10011, EMAIL |
| Pseudocode | Yes | Figure 2: Exponential weight algorithm for online matching with free disposal. Figure 4: Maintaining solution to primal and dual LPs. |
| Open Source Code | No | The paper does not provide an explicit statement about the release of source code for the described methodology or a link to a code repository. |
| Open Datasets | No | The paper focuses on theoretical contributions (algorithms, proofs, hardness results) and does not describe empirical experiments with specific datasets, therefore no public access information for training data is provided. |
| Dataset Splits | No | The paper is theoretical and does not conduct experiments with dataset splits. Therefore, no information on training/validation/test splits is provided. |
| Hardware Specification | No | The paper is theoretical and does not describe any computational experiments or their hardware specifications. |
| Software Dependencies | No | The paper is theoretical and focuses on algorithm design and proofs, and as such does not list specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe empirical experiments or their setup details such as hyperparameters or training configurations. |
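The allocation rule quoted above (when each item arrives, pick one of the two objectives uniformly at random and assign the item greedily under it) is simple enough to sketch. The following is an illustrative sketch only, not code from the paper; the function names, the dict-of-functions representation of agent valuations, and the toy additive objectives in the usage example are all assumptions.

```python
import random

def marginal_gain(f, bundle, item):
    """Marginal value of adding `item` to `bundle` under set function f."""
    return f(bundle | {item}) - f(bundle)

def random_greedy_biobjective(items, agents, f1, f2, seed=None):
    """Sketch of the random-selection + greedy rule described in the paper:
    when an item arrives online, flip a fair coin to pick one of the two
    objectives, then assign the item to the agent with the largest marginal
    gain under that objective. f1[a] and f2[a] are agent a's two (assumed
    monotone submodular) value functions over sets of items.
    """
    rng = random.Random(seed)
    allocation = {a: set() for a in agents}
    for item in items:  # items arrive one at a time
        f = f1 if rng.random() < 0.5 else f2  # random objective choice
        best = max(agents,
                   key=lambda a: marginal_gain(f[a], allocation[a], item))
        allocation[best].add(item)
    return allocation

# Toy usage with additive (hence submodular) valuations:
agents = ["a", "b"]
items = [1, 2, 3]
w1 = {"a": {1: 3, 2: 1, 3: 1}, "b": {1: 1, 2: 2, 3: 2}}
w2 = {"a": {1: 1, 2: 4, 3: 1}, "b": {1: 2, 2: 1, 3: 3}}
f1 = {a: (lambda S, a=a: sum(w1[a][i] for i in S)) for a in agents}
f2 = {a: (lambda S, a=a: sum(w2[a][i] for i in S)) for a in agents}
alloc = random_greedy_biobjective(items, agents, f1, f2, seed=0)
```

Each item is assigned to exactly one agent, and each arrival costs only one objective draw plus one greedy comparison per agent, matching the paper's claim that the rule is fast and trivial to implement.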