Proportional Allocation: Simple, Distributed, and Diverse Matching with High Entropy
Authors: Shipra Agrawal, Morteza Zadimoghaddam, Vahab Mirrokni
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Our first result is that this simple, distributed algorithm converges to a (1 − ϵ)-approximate fractional b-matching solution in O(log n / ϵ²) rounds. We also extend the proportional allocation algorithm and convergence results to the maximum weighted matching problem... Section 4 (Analysis) includes the proofs of Theorem 1 and Theorem 2. |
| Researcher Affiliation | Collaboration | ¹Columbia University, New York, NY; ²Google Research. Correspondence to: Shipra Agrawal <sa3305@columbia.edu>, Vahab Mirrokni <mirrokni@google.com>, Morteza Zadimoghaddam <zadim@google.com>. |
| Pseudocode | Yes | Algorithm 1 PropAlloc: A proportional allocation algorithm for maximum cardinality matching and Algorithm 2 PropAlloc+: A proportional allocation algorithm for high-entropy maximum weight matching (an illustrative sketch follows the table). |
| Open Source Code | No | The paper does not contain any statements about releasing open-source code for the described methodology, nor does it provide any links to a code repository. |
| Open Datasets | No | The paper does not discuss or use any datasets, as it presents a theoretical framework and algorithm analysis. |
| Dataset Splits | No | The paper is theoretical and does not conduct experiments; therefore, it does not mention training, validation, or test dataset splits. |
| Hardware Specification | No | The paper does not describe any experimental setup or mention specific hardware used, as it focuses on theoretical algorithm analysis. |
| Software Dependencies | No | The paper is theoretical and does not mention any specific software dependencies or their version numbers for experimental reproducibility. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with hyperparameters or training configurations. |